[jira] [Commented] (HBASE-16053) Master code is not setting the table in ENABLING state in create table

2016-06-16 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335552#comment-15335552
 ] 

Matteo Bertozzi commented on HBASE-16053:
-----------------------------------------

+1

> Master code is not setting the table in ENABLING state in create table
> ----------------------------------------------------------------------
>
> Key: HBASE-16053
> URL: https://issues.apache.org/jira/browse/HBASE-16053
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: hbase-16053_v1.patch
>
>
> Unit test logs are filled with the following, because in master, unlike 
> branch-1, we are missing the code which sets the table to the ENABLING state 
> before assignment in CreateTableProcedure. 
> {code}
> 2016-06-10 17:48:15,832 ERROR 
> [B.defaultRpcServer.handler=0,queue=0,port=60448] 
> master.TableStateManager(134): Unable to get table testRegionCache state
> org.apache.hadoop.hbase.TableNotFoundException: testRegionCache
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2320)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2900)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1334)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2273)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:116)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:138)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$2.run(RpcExecutor.java:113)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
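
A minimal sketch of the missing step, assuming the master's TableStateManager
API; the helper name and exact call shape here are illustrative, not the
attached patch:

{code}
// Hypothetical sketch: mark the table ENABLING before region assignment in
// CreateTableProcedure, so concurrent RPCs such as reportRegionStateTransition
// can resolve the table state instead of hitting TableNotFoundException.
private static void markTableEnabling(final MasterProcedureEnv env,
    final TableName tableName) throws IOException {
  env.getMasterServices().getTableStateManager()
      .setTableState(tableName, TableState.State.ENABLING);
}
{code}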



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16055) PutSortReducer loses any Visibility/acl attribute set on the Puts

2016-06-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16055:
-------------------------------------------
Component/s: security

> PutSortReducer loses any Visibility/acl attribute set on the Puts 
> ----------------------------------------------------------------------
>
> Key: HBASE-16055
> URL: https://issues.apache.org/jira/browse/HBASE-16055
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 2.0.0
>
>
> Based on a user discussion, and as the user rightly pointed out, when a 
> PutSortReducer is used, any visibility or other attribute set on the Put 
> will be lost, because we create KVs out of the cells in the Puts whereas the 
> ACL and visibility are set as attributes. 
> In TextSortReducer we read that information from the parsed line, but here 
> in PutSortReducer we don't. I think this problem exists in all the existing 
> versions where we support Tags. Correct me if I am wrong here. 
> [~anoop.hbase], [~andrew.purt...@gmail.com]?
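
A sketch of the kind of fix being suggested; the attribute key, tag type, and
conversion below are assumptions for illustration, not the eventual patch:

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.Tag;
import org.apache.hadoop.hbase.TagType;
import org.apache.hadoop.hbase.client.Put;

// Hypothetical sketch: carry the Put's visibility attribute over into tags on
// the emitted KeyValues, so the information survives the KV conversion.
public class PreserveVisibilitySketch {
  static List<KeyValue> toTaggedKVs(Put put) {
    List<KeyValue> kvs = new ArrayList<KeyValue>();
    byte[] visibility = put.getAttribute("VISIBILITY"); // assumed attribute key
    List<Tag> tags = new ArrayList<Tag>();
    if (visibility != null) {
      tags.add(new Tag(TagType.VISIBILITY_TAG_TYPE, visibility)); // assumed encoding
    }
    for (List<Cell> cells : put.getFamilyCellMap().values()) {
      for (Cell c : cells) {
        // re-create each KV with the tags attached instead of dropping them
        kvs.add(new KeyValue(CellUtil.cloneRow(c), CellUtil.cloneFamily(c),
            CellUtil.cloneQualifier(c), c.getTimestamp(), KeyValue.Type.Put,
            CellUtil.cloneValue(c), tags));
      }
    }
    return kvs;
  }
}
{code}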



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16055) PutSortReducer loses any Visibility/acl attribute set on the Puts

2016-06-16 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-16055:
------------------------------------------

 Summary: PutSortReducer loses any Visibility/acl attribute set on 
the Puts 
 Key: HBASE-16055
 URL: https://issues.apache.org/jira/browse/HBASE-16055
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 2.0.0


Based on a user discussion, and as the user rightly pointed out, when a 
PutSortReducer is used, any visibility or other attribute set on the Put will 
be lost, because we create KVs out of the cells in the Puts whereas the ACL 
and visibility are set as attributes. 
In TextSortReducer we read that information from the parsed line, but here in 
PutSortReducer we don't. I think this problem exists in all the existing 
versions where we support Tags. Correct me if I am wrong here. 
[~anoop.hbase], [~andrew.purt...@gmail.com]?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16054) OutOfMemory exception when use AsyncRpcClient with encryption

2016-06-16 Thread Colin Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Ma updated HBASE-16054:
-----------------------------
Status: Patch Available  (was: Open)

> OutOfMemory exception when use AsyncRpcClient with encryption
> -------------------------------------------------------------
>
> Key: HBASE-16054
> URL: https://issues.apache.org/jira/browse/HBASE-16054
> Project: HBase
>  Issue Type: Bug
>Reporter: Colin Ma
>Assignee: Colin Ma
> Attachments: HBASE-16054.001.patch
>
>
> Testing the AsyncRpcClient with encryption in an infinite loop produces an 
> OOM exception like the following:
> io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 29106160 
> byte(s) of direct memory (used: 2607219760, max: 2609905664)
>   at 
> io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:592)
>   at 
> io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:546)
>   at 
> io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:699)
>   at 
> io.netty.buffer.PoolArena$DirectArena.newUnpooledChunk(PoolArena.java:695)
>   at io.netty.buffer.PoolArena.allocateHuge(PoolArena.java:246)
>   at io.netty.buffer.PoolArena.allocate(PoolArena.java:224)
>   at io.netty.buffer.PoolArena.allocate(PoolArena.java:141)
>   at 
> io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:262)
>   at 
> io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
>   at 
> io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:170)
>   at 
> io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:107)
>   at 
> org.apache.hadoop.hbase.security.SaslClientHandler.write(SaslClientHandler.java:328)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:724)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:716)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:36)
>   at 
> io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1064)
>   at 
> io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:)
>   at 
> io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1049)
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
>   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:393)
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
>   at java.lang.Thread.run(Thread.java:745)
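
The trace points at the direct-buffer allocation in SaslClientHandler.write.
A minimal sketch of the usual remedy for this kind of leak, assuming the
handler wraps outgoing bytes through SaslClient (names and shape illustrative,
not the attached patch):

{code}
import io.netty.buffer.ByteBuf;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslException;

public final class SaslWrapSketch {
  // Hypothetical sketch: release the pooled buffer once its bytes have been
  // SASL-wrapped, so direct memory returns to the netty arena instead of
  // accumulating until OutOfDirectMemoryError.
  static byte[] wrapAndRelease(SaslClient saslClient, ByteBuf msg) throws SaslException {
    try {
      byte[] bytes = new byte[msg.readableBytes()];
      msg.readBytes(bytes);
      return saslClient.wrap(bytes, 0, bytes.length);
    } finally {
      msg.release(); // without this, every encrypted write leaks direct memory
    }
  }
}
{code}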



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16054) OutOfMemory exception when use AsyncRpcClient with encryption

2016-06-16 Thread Colin Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Ma updated HBASE-16054:
-----------------------------
Attachment: HBASE-16054.001.patch

> OutOfMemory exception when use AsyncRpcClient with encryption
> -------------------------------------------------------------
>
> Key: HBASE-16054
> URL: https://issues.apache.org/jira/browse/HBASE-16054
> Project: HBase
>  Issue Type: Bug
>Reporter: Colin Ma
>Assignee: Colin Ma
> Attachments: HBASE-16054.001.patch
>
>
> Testing the AsyncRpcClient with encryption in an infinite loop produces an 
> OOM exception like the following:
> io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 29106160 
> byte(s) of direct memory (used: 2607219760, max: 2609905664)
>   at 
> io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:592)
>   at 
> io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:546)
>   at 
> io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:699)
>   at 
> io.netty.buffer.PoolArena$DirectArena.newUnpooledChunk(PoolArena.java:695)
>   at io.netty.buffer.PoolArena.allocateHuge(PoolArena.java:246)
>   at io.netty.buffer.PoolArena.allocate(PoolArena.java:224)
>   at io.netty.buffer.PoolArena.allocate(PoolArena.java:141)
>   at 
> io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:262)
>   at 
> io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
>   at 
> io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:170)
>   at 
> io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:107)
>   at 
> org.apache.hadoop.hbase.security.SaslClientHandler.write(SaslClientHandler.java:328)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:724)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:716)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:36)
>   at 
> io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1064)
>   at 
> io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:)
>   at 
> io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1049)
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
>   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:393)
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16054) OutOfMemory exception when use AsyncRpcClient with encryption

2016-06-16 Thread Colin Ma (JIRA)
Colin Ma created HBASE-16054:


 Summary: OutOfMemory exception when use AsyncRpcClient with 
encryption
 Key: HBASE-16054
 URL: https://issues.apache.org/jira/browse/HBASE-16054
 Project: HBase
  Issue Type: Bug
Reporter: Colin Ma
Assignee: Colin Ma


Testing the AsyncRpcClient with encryption in an infinite loop produces an 
OOM exception like the following:

io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 29106160 
byte(s) of direct memory (used: 2607219760, max: 2609905664)
at 
io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:592)
at 
io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:546)
at 
io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:699)
at 
io.netty.buffer.PoolArena$DirectArena.newUnpooledChunk(PoolArena.java:695)
at io.netty.buffer.PoolArena.allocateHuge(PoolArena.java:246)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:224)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:141)
at 
io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:262)
at 
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
at 
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:170)
at 
io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:107)
at 
org.apache.hadoop.hbase.security.SaslClientHandler.write(SaslClientHandler.java:328)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:724)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:716)
at 
io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:36)
at 
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1064)
at 
io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:)
at 
io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1049)
at 
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:393)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15600) Add provision for adding mutations to memstore or able to write to same region in batchMutate coprocessor hooks

2016-06-16 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335533#comment-15335533
 ] 

Mikhail Antonov commented on HBASE-15600:
-----------------------------------------

Oops - I didn't notice it had already been committed. Sorry for the noise.

> Add provision for adding mutations to memstore or able to write to same 
> region in batchMutate coprocessor hooks
> ----------------------------------------------------------------------
>
> Key: HBASE-15600
> URL: https://issues.apache.org/jira/browse/HBASE-15600
> Project: HBase
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: phoenix
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: HBASE-15600.patch, HBASE-15600_v1.patch, 
> HBASE-15600_v2.patch, hbase-15600_v3.patch, hbase-15600_v4.patch, 
> hbase-15600_v5.patch, hbase-15600_v6.patch
>
>
> As part of PHOENIX-1734 we need to write the index updates to the same 
> region from coprocessors, but writing from the batchMutate API is not 
> allowed because of MVCC. 
> PHOENIX-2742 was raised to discuss alternative ways to write to the same 
> region directly, but no proper solution came out of it.
> Currently we have a provision to write WAL edits from coprocessors: we can 
> set WAL edits in MiniBatchOperationInProgress.
> {noformat}
>   /**
>    * Sets the walEdit for the operation(Mutation) at the specified position.
>    * @param index
>    * @param walEdit
>    */
>   public void setWalEdit(int index, WALEdit walEdit) {
>     this.walEditsFromCoprocessors[getAbsoluteIndex(index)] = walEdit;
>   }
> {noformat}
> Similarly, we can allow mutations from coprocessors to be written to the 
> memstore as well. Otherwise, we should extend the batch mutation API to 
> allow writes from batchMutate coprocessor hooks.
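
By analogy with setWalEdit, the hook could take a shape like the sketch below;
the method name, field, and semantics here are assumptions for discussion, not
the committed API:

{noformat}
  /**
   * Hypothetical sketch: lets a coprocessor attach extra mutations to the
   * operation at the specified position, to be applied to the memstore as
   * part of the same batch.
   */
  public void addOperationsFromCP(int index, Mutation[] newOperations) {
    if (this.operationsFromCoprocessors == null) {
      this.operationsFromCoprocessors = new Mutation[operations.length][];
    }
    this.operationsFromCoprocessors[getAbsoluteIndex(index)] = newOperations;
  }
{noformat}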



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15600) Add provision for adding mutations to memstore or able to write to same region in batchMutate coprocessor hooks

2016-06-16 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15600:

Fix Version/s: 1.3.0

> Add provision for adding mutations to memstore or able to write to same 
> region in batchMutate coprocessor hooks
> ----------------------------------------------------------------------
>
> Key: HBASE-15600
> URL: https://issues.apache.org/jira/browse/HBASE-15600
> Project: HBase
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: phoenix
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: HBASE-15600.patch, HBASE-15600_v1.patch, 
> HBASE-15600_v2.patch, hbase-15600_v3.patch, hbase-15600_v4.patch, 
> hbase-15600_v5.patch, hbase-15600_v6.patch
>
>
> As part of PHOENIX-1734 we need to write the index updates to the same 
> region from coprocessors, but writing from the batchMutate API is not 
> allowed because of MVCC. 
> PHOENIX-2742 was raised to discuss alternative ways to write to the same 
> region directly, but no proper solution came out of it.
> Currently we have a provision to write WAL edits from coprocessors: we can 
> set WAL edits in MiniBatchOperationInProgress.
> {noformat}
>   /**
>    * Sets the walEdit for the operation(Mutation) at the specified position.
>    * @param index
>    * @param walEdit
>    */
>   public void setWalEdit(int index, WALEdit walEdit) {
>     this.walEditsFromCoprocessors[getAbsoluteIndex(index)] = walEdit;
>   }
> {noformat}
> Similarly, we can allow mutations from coprocessors to be written to the 
> memstore as well. Otherwise, we should extend the batch mutation API to 
> allow writes from batchMutate coprocessor hooks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15600) Add provision for adding mutations to memstore or able to write to same region in batchMutate coprocessor hooks

2016-06-16 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335529#comment-15335529
 ] 

Mikhail Antonov commented on HBASE-15600:
-----------------------------------------

Kicked out of 1.3

> Add provision for adding mutations to memstore or able to write to same 
> region in batchMutate coprocessor hooks
> ----------------------------------------------------------------------
>
> Key: HBASE-15600
> URL: https://issues.apache.org/jira/browse/HBASE-15600
> Project: HBase
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: phoenix
> Fix For: 2.0.0, 1.4.0, 0.98.21
>
> Attachments: HBASE-15600.patch, HBASE-15600_v1.patch, 
> HBASE-15600_v2.patch, hbase-15600_v3.patch, hbase-15600_v4.patch, 
> hbase-15600_v5.patch, hbase-15600_v6.patch
>
>
> As part of PHOENIX-1734 we need to write the index updates to the same 
> region from coprocessors, but writing from the batchMutate API is not 
> allowed because of MVCC. 
> PHOENIX-2742 was raised to discuss alternative ways to write to the same 
> region directly, but no proper solution came out of it.
> Currently we have a provision to write WAL edits from coprocessors: we can 
> set WAL edits in MiniBatchOperationInProgress.
> {noformat}
>   /**
>    * Sets the walEdit for the operation(Mutation) at the specified position.
>    * @param index
>    * @param walEdit
>    */
>   public void setWalEdit(int index, WALEdit walEdit) {
>     this.walEditsFromCoprocessors[getAbsoluteIndex(index)] = walEdit;
>   }
> {noformat}
> Similarly, we can allow mutations from coprocessors to be written to the 
> memstore as well. Otherwise, we should extend the batch mutation API to 
> allow writes from batchMutate coprocessor hooks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15600) Add provision for adding mutations to memstore or able to write to same region in batchMutate coprocessor hooks

2016-06-16 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15600:

Fix Version/s: (was: 1.3.0)

> Add provision for adding mutations to memstore or able to write to same 
> region in batchMutate coprocessor hooks
> ----------------------------------------------------------------------
>
> Key: HBASE-15600
> URL: https://issues.apache.org/jira/browse/HBASE-15600
> Project: HBase
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: phoenix
> Fix For: 2.0.0, 1.4.0, 0.98.21
>
> Attachments: HBASE-15600.patch, HBASE-15600_v1.patch, 
> HBASE-15600_v2.patch, hbase-15600_v3.patch, hbase-15600_v4.patch, 
> hbase-15600_v5.patch, hbase-15600_v6.patch
>
>
> As part of PHOENIX-1734 we need to write the index updates to the same 
> region from coprocessors, but writing from the batchMutate API is not 
> allowed because of MVCC. 
> PHOENIX-2742 was raised to discuss alternative ways to write to the same 
> region directly, but no proper solution came out of it.
> Currently we have a provision to write WAL edits from coprocessors: we can 
> set WAL edits in MiniBatchOperationInProgress.
> {noformat}
>   /**
>    * Sets the walEdit for the operation(Mutation) at the specified position.
>    * @param index
>    * @param walEdit
>    */
>   public void setWalEdit(int index, WALEdit walEdit) {
>     this.walEditsFromCoprocessors[getAbsoluteIndex(index)] = walEdit;
>   }
> {noformat}
> Similarly, we can allow mutations from coprocessors to be written to the 
> memstore as well. Otherwise, we should extend the batch mutation API to 
> allow writes from batchMutate coprocessor hooks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15982) Interface ReplicationEndpoint extends Guava's Service

2016-06-16 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15982:

Fix Version/s: (was: 1.3.0)

> Interface ReplicationEndpoint extends Guava's Service
> -----------------------------------------------------
>
> Key: HBASE-15982
> URL: https://issues.apache.org/jira/browse/HBASE-15982
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
> Fix For: 2.0.0, 1.4.0, 0.98.21
>
>
> We have Guava's Service leaking into the LimitedPrivate interface 
> ReplicationEndpoint:
> {code}
> public interface ReplicationEndpoint extends Service, 
> ReplicationPeerConfigListener
> {code}
> This required a private patch when I updated Guava for our internal 
> deployments. This is going to be a problem for long-term maintenance and for 
> implementers of pluggable replication endpoints. LP is only less than public 
> by a degree. We shouldn't leak types from third-party code into either 
> Public or LP APIs, in my opinion. Let's fix this.
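
One possible direction, sketched under the assumption that HBase would own the
lifecycle methods itself rather than inheriting them (not the agreed fix):

{code}
// Hypothetical sketch: drop "extends Service" and declare an HBase-owned
// lifecycle, so a Guava upgrade can no longer break the LimitedPrivate contract.
public interface ReplicationEndpoint extends ReplicationPeerConfigListener {
  void start();          // begin shipping edits
  void stop();           // stop and release resources
  boolean isRunning();   // lifecycle query previously inherited from Guava's Service
}
{code}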



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15982) Interface ReplicationEndpoint extends Guava's Service

2016-06-16 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335512#comment-15335512
 ] 

Mikhail Antonov commented on HBASE-15982:
-----------------------------------------

At this point I'd say let's just not make it into 1.3.

Good new changes come in every day, but at some point we have to draw a line 
in the sand and say that from now on, no changes go in before the RC except 
critical fixes plus patches needed to stabilize the branch (test fixes, 
broken/flaky tests, etc.).

Since it's only an LP interface, I don't think it qualifies as critical, but 
please correct me if I'm missing something.

> Interface ReplicationEndpoint extends Guava's Service
> -----------------------------------------------------
>
> Key: HBASE-15982
> URL: https://issues.apache.org/jira/browse/HBASE-15982
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
> Fix For: 2.0.0, 1.4.0, 0.98.21
>
>
> We have Guava's Service leaking into the LimitedPrivate interface 
> ReplicationEndpoint:
> {code}
> public interface ReplicationEndpoint extends Service, 
> ReplicationPeerConfigListener
> {code}
> This required a private patch when I updated Guava for our internal 
> deployments. This is going to be a problem for long-term maintenance and for 
> implementers of pluggable replication endpoints. LP is only less than public 
> by a degree. We shouldn't leak types from third-party code into either 
> Public or LP APIs, in my opinion. Let's fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13336) Consistent rules for security meta table protections

2016-06-16 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335499#comment-15335499
 ] 

Mikhail Antonov commented on HBASE-13336:
-

[~apurtell] sorry, I never got my hands on it. If it's still relevant I would 
probably just unassign it from myself and leave it open; otherwise we can 
resolve it?

> Consistent rules for security meta table protections
> ----------------------------------------------------
>
> Key: HBASE-13336
> URL: https://issues.apache.org/jira/browse/HBASE-13336
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Mikhail Antonov
> Fix For: 2.0.0
>
> Attachments: HBASE-13336.patch, HBASE-13336_v2.patch
>
>
> The AccessController and VisibilityController do different things regarding 
> protecting their meta tables. The AC allows schema changes and disable/enable 
> if the user has permission. The VC unconditionally disallows all admin 
> actions. Generally, bad things will happen if these meta tables are damaged, 
> disabled, or dropped. The likely outcome is random frequent (or constant) 
> server side op failures with nasty stack traces. On the other hand some 
> things like column family and table attribute changes can have valid use 
> cases. We should have consistent and sensible rules for protecting security 
> meta tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16053) Master code is not setting the table in ENABLING state in create table

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335485#comment-15335485
 ] 

Hadoop QA commented on HBASE-16053:
-----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
38s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
42m 37s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 111m 31s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
34s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 188m 31s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 |
|   | org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811269/hbase-16053_v1.patch |
| JIRA Issue | HBASE-16053 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux pomona.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT 
Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-perso

[jira] [Commented] (HBASE-14743) Add metrics around HeapMemoryManager

2016-06-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335486#comment-15335486
 ] 

stack commented on HBASE-14743:
-------------------------------

Thanks for changing the name to Memory. I tried the latest patch. I loaded 
data, but all metrics are zero in jvisualvm when looking at the memory bean. 
Is your experience different? I just build w/ the patch, then launch a local 
instance of hbase with ./bin/hbase start, and then launch jvisualvm and look 
at the bean while I put some data in and flush. Thanks.

> Add metrics around HeapMemoryManager
> ------------------------------------
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-14743.001.patch, HBASE-14743.002.patch, 
> HBASE-14743.003.patch, HBASE-14743.004.patch, HBASE-14743.005.patch, 
> HBASE-14743.006.patch, HBASE-14743.007.patch, HBASE-14743.008.patch, 
> HBASE-14743.009.patch, HBASE-14743.009.rw3.patch, HBASE-14743.009.v2.patch, 
> Screen Shot 2016-06-16 at 5.39.13 PM.png
>
>
> It would be good to know how many invocations there have been.
> How many decided to expand memstore.
> How many decided to expand block cache.
> How many decided to do nothing.
> etc.
> When that's done use those metrics to clean up the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15806) An endpoint-based export tool

2016-06-16 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335458#comment-15335458
 ] 

Jerry He commented on HBASE-15806:
----------------------------------

Ok.  You can have a separate JIRA to enhance that.

> An endpoint-based export tool
> -----------------------------
>
> Key: HBASE-15806
> URL: https://issues.apache.org/jira/browse/HBASE-15806
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: Experiment.png, HBASE-15806-v1.patch, 
> HBASE-15806-v2.patch, HBASE-15806-v3.patch, HBASE-15806.patch
>
>
> The time for exporting a table can be reduced if we use the endpoint 
> technique to have the region server, rather than the hbase client, export 
> the hdfs files.
> In my experiments, the elapsed time of endpoint-based export can be less 
> than half that of the current export tool (with hdfs compression enabled).
> But the shortcoming is that we need to alter the table to deploy the 
> endpoint.
> Any comments about this? Thanks.
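
For reference, the "alter table" step could look like the sketch below; the
endpoint class name and Admin calls are assumptions for illustration:

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class DeployExportEndpointSketch {
  // Hypothetical sketch: attach the export endpoint to a table via the Admin
  // API; this is the per-table deployment cost mentioned above.
  static void deploy(Admin admin, TableName table) throws IOException {
    HTableDescriptor desc = admin.getTableDescriptor(table);
    desc.addCoprocessor("org.apache.hadoop.hbase.coprocessor.Export"); // assumed class
    admin.modifyTable(table, desc);
  }
}
{code}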



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11625) Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum

2016-06-16 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335455#comment-15335455
 ] 

Nick Dimiduk commented on HBASE-11625:
--------------------------------------

Sorry for the delay. Really nice work on this one [~appy], +1. I'd appreciate 
any extra testing you might be able to do around this feature at RC time :)

> Reading datablock throws "Invalid HFile block magic" and can not switch to 
> hdfs checksum 
> ----------------------------------------------------------------------------
>
> Key: HBASE-11625
> URL: https://issues.apache.org/jira/browse/HBASE-11625
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.94.21, 0.98.4, 0.98.5, 1.0.1.1, 1.0.3
>Reporter: qian wang
>Assignee: Appy
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2
>
> Attachments: 2711de1fdf73419d9f8afc6a8b86ce64.gz, 
> HBASE-11625-branch-1-v1.patch, HBASE-11625-branch-1.2-v1.patch, 
> HBASE-11625-branch-1.2-v2.patch, HBASE-11625-branch-1.2-v3.patch, 
> HBASE-11625-branch-1.2-v4.patch, HBASE-11625-master-v2.patch, 
> HBASE-11625-master-v3.patch, HBASE-11625-master.patch, 
> HBASE-11625.branch-1.1.001.patch, HBASE-11625.patch, correct-hfile, 
> corrupted-header-hfile
>
>
> When using hbase checksum, readBlockDataInternal() in HFileBlock.java may 
> hit file corruption, but it can only switch to the hdfs checksum input 
> stream at validateBlockChecksum(). If the data block's header is corrupted 
> when b = new HFileBlock() is constructed, it throws the exception "Invalid 
> HFile block magic" and the rpc call fails.
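
The failover being asked for would look roughly like the sketch below; the
method shape and flag are illustrative, not the attached patches:

{code}
// Hypothetical sketch: if reading/parsing a block (header included) fails
// while HBase-level checksums are in use, retry with HDFS checksums instead
// of surfacing "Invalid HFile block magic" to the rpc caller.
HFileBlock block;
try {
  block = readBlockDataInternal(is, offset, onDiskSizeWithHeader,
      /* verifyChecksum = */ true);
} catch (IOException corruptHeader) {
  // the header itself may be damaged, so fall back before giving up
  block = readBlockDataInternal(is, offset, onDiskSizeWithHeader,
      /* verifyChecksum = */ false);
}
{code}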



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335454#comment-15335454
 ] 

Hudson commented on HBASE-5291:
-------------------------------

FAILURE: Integrated in HBase-1.4 #220 (See 
[https://builds.apache.org/job/HBase-1.4/220/])
HBASE-5291 Addendum 2 passes correct path to deleteRecursively (tedyu: rev 
45a0fc531a3d35edc78e9c60ef93bc7538cf4b30)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/http/HttpServerFunctionalTest.java


> Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
> ----------------------------------------------------------------------
>
> Key: HBASE-5291
> URL: https://issues.apache.org/jira/browse/HBASE-5291
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver, security
>Reporter: Andrew Purtell
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 5291-addendum.2, HBASE-5291-addendum.patch, 
> HBASE-5291.001.patch, HBASE-5291.002.patch, HBASE-5291.003.patch, 
> HBASE-5291.004.patch, HBASE-5291.005-0.98.patch, 
> HBASE-5291.005-branch-1.patch, HBASE-5291.005.patch
>
>
> Like HADOOP-7119, the same motivations:
> {quote}
> Hadoop RPC already supports Kerberos authentication. 
> {quote}
> As does the HBase secure RPC engine.
> {quote}
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide 
> a unified authentication mechanism and single sign-on for web UI and RPC.
> {quote}
> Also like HADOOP-7119, the same solution:
> A servlet filter is configured in front of all Hadoop web consoles for 
> authentication.
> This filter verifies whether the incoming request is already authenticated 
> by the presence of a signed HTTP cookie. If the cookie is present, its 
> signature is valid, and its value hasn't expired, the request continues on 
> to the page it invoked. If the cookie is absent, invalid, or expired, the 
> request is delegated to an authenticator handler. The 
> authenticator handler then is responsible for requesting/validating the 
> user-agent for the user credentials. This may require one or more additional 
> interactions between the authenticator handler and the user-agent (which will 
> be multiple HTTP requests). Once the authenticator handler verifies the 
> credentials and generates an authentication token, a signed cookie is 
> returned to the user-agent for all subsequent invocations.
> The authenticator handler is pluggable and 2 implementations are provided out 
> of the box: pseudo/simple and kerberos.
> 1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
> pseudo/simple authentication. It trusts the value of the user.name query 
> string parameter. The pseudo/simple authenticator handler supports an 
> anonymous mode which accepts any request without requiring the user.name 
> query string parameter to create the token. This is the default behavior, 
> preserving the behavior of the HBase web consoles before this patch.
> 2. The kerberos authenticator handler implements the Kerberos HTTP SPNEGO 
> implementation. This authenticator handler will generate a token only if a 
> successful Kerberos HTTP SPNEGO interaction is performed between the 
> user-agent and the authenticator. Browsers like Firefox and Internet Explorer 
> support Kerberos HTTP SPNEGO.
> We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
> matter of wiring up the filter to our infoservers in a similar manner. 
> And from 
> https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086
> {quote}
> Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
> authentication for webapps via a filter. You should consider using it. You 
> don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
> artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
> {quote}
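
The described flow maps onto a servlet filter shaped roughly like the sketch
below; AuthToken, authHandler, and the cookie helpers are hypothetical names
standing in for the hadoop-auth machinery:

{code}
// Hypothetical sketch of the described filter: pass requests carrying a valid
// signed cookie, otherwise delegate to the pluggable authenticator handler
// (pseudo/simple or kerberos) to challenge the user-agent.
public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
    throws IOException, ServletException {
  HttpServletRequest request = (HttpServletRequest) req;
  HttpServletResponse response = (HttpServletResponse) resp;
  AuthToken token = getTokenFromSignedCookie(request); // null if absent/invalid/expired
  if (token == null) {
    // may take several HTTP round trips (e.g. SPNEGO negotiation headers);
    // on success a signed cookie is issued for subsequent requests
    token = authHandler.authenticate(request, response);
    if (token != null) {
      setSignedCookie(response, token);
    }
  }
  if (token != null) {
    chain.doFilter(request, response); // authenticated; continue to the page
  }
}
{code}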



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15600) Add provision for adding mutations to memstore or able to write to same region in batchMutate coprocessor hooks

2016-06-16 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335434#comment-15335434
 ] 

Nick Dimiduk commented on HBASE-15600:
--------------------------------------

I'm struggling to find examples to cite, but I think we've already said that LP.Coproc 
APIs are quasi-exempt from our semantic-ish versioning guidelines. We want to 
be a hospitable environment for extension projects like Phoenix (and Trafodion, 
and..), all of which by design are monkeying in HBase guts to provide a better 
user experience. I'm definitely sympathetic to the position of Phoenix having 
to explain different semantics for the same feature, on the same Phoenix 
release, based on which HBase version a user is running. But does that mean a 
new release of HBase 1.0 would be required if Phoenix "demands" it? I'm not 
sure where to draw the line on accommodation.

Yeah, sounds like a dev@ discussion is in order. I think a better understanding 
of how sensitive Phoenix is to HBase patch releases, a la 
[PHOENIX-2732|https://issues.apache.org/jira/browse/PHOENIX-2732?focusedCommentId=15335405&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15335405],
 would help fuel this discussion.

Also [~enis], I'm curious whether the Phoenix v4.7.0 release runs on HBase 
1.1.x + this patch. That would be a good indicator of the 
backward-compatibility implications of the attached. Shy of a compat report, 
that is.

> Add provision for adding mutations to memstore or able to write to same 
> region in batchMutate coprocessor hooks
> ----------------------------------------------------------------------
>
> Key: HBASE-15600
> URL: https://issues.apache.org/jira/browse/HBASE-15600
> Project: HBase
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: phoenix
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: HBASE-15600.patch, HBASE-15600_v1.patch, 
> HBASE-15600_v2.patch, hbase-15600_v3.patch, hbase-15600_v4.patch, 
> hbase-15600_v5.patch, hbase-15600_v6.patch
>
>
> As part of PHOENIX-1734 we need to write the index updates to same region 
> from coprocessors but writing from batchMutate API is not allowed because of 
> mvcc. 
> Raised PHOENIX-2742 to discuss any alternative way to write to the same 
> region directly or not but not having any proper solution there.
> Currently we have provision to write wal edits from coprocessors. We can set 
> wal edits in MiniBatchOperationInProgress.
> {noformat}
>   /**
>* Sets the walEdit for the operation(Mutation) at the specified position.
>* @param index
>* @param walEdit
>*/
>   public void setWalEdit(int index, WALEdit walEdit) {
> this.walEditsFromCoprocessors[getAbsoluteIndex(index)] = walEdit;
>   }
> {noformat}
> Similarly we can allow to write mutations from coprocessors to memstore as 
> well. Or else we should provide the batch mutation API allow write in batch 
> mutate coprocessors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-16 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335432#comment-15335432
 ] 

Yu Li commented on HBASE-16032:
-------------------------------

Thanks for the review [~ted_yu]; will commit soon if no objections.

> Possible memory leak in StoreScanner
> ------------------------------------
>
> Key: HBASE-16032
> URL: https://issues.apache.org/jira/browse/HBASE-16032
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16032.patch, HBASE-16032_v2.patch, 
> HBASE-16032_v3.patch
>
>
> We observed frequent full GCs of the RS in our production environment, and 
> after analyzing the heap dump, we found large memory occupancy by 
> HStore#changedReaderObservers; the map surprisingly contained 7500w (75 
> million) objects...
> After some debugging, I located a possible memory leak in the StoreScanner 
> constructor:
> {code}
>   public StoreScanner(Store store, ScanInfo scanInfo, Scan scan,
>       final NavigableSet<byte[]> columns, long readPt) throws IOException {
>     this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
>     if (columns != null && scan.isRaw()) {
>       throw new DoNotRetryIOException("Cannot specify any column for a raw scan");
>     }
>     matcher = new ScanQueryMatcher(scan, scanInfo, columns,
>         ScanType.USER_SCAN, Long.MAX_VALUE, HConstants.LATEST_TIMESTAMP,
>         oldestUnexpiredTS, now, store.getCoprocessorHost());
>     this.store.addChangedReaderObserver(this);
>     // Pass columns to try to filter out unnecessary StoreFiles.
>     List<KeyValueScanner> scanners = getScannersNoCompaction();
>     ...
>     seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
>         && lazySeekEnabledGlobally, parallelSeekEnabled);
>     ...
>     resetKVHeap(scanners, store.getComparator());
>   }
> {code}
> If any exception is thrown after 
> {{this.store.addChangedReaderObserver(this)}}, the returned scanner might be 
> null and there's no chance to remove the scanner from changedReaderObservers, 
> as in {{HRegion#get}}:
> {code}
>     RegionScanner scanner = null;
>     try {
>       scanner = getScanner(scan);
>       scanner.next(results);
>     } finally {
>       if (scanner != null)
>         scanner.close();
>     }
> {code}
> What's more, any exception thrown in the {{HRegion#getScanner}} path will 
> leave scanner==null and leak memory the same way, so we also need to handle 
> this part.
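
A sketch of one way to close the hole in the constructor itself, assuming
deleteChangedReaderObserver as the inverse of the registration (shape
illustrative, not the attached patch):

{code}
// Hypothetical sketch: undo the observer registration if anything after it
// throws, so a failed scanner can never linger in HStore#changedReaderObservers.
this.store.addChangedReaderObserver(this);
try {
  List<KeyValueScanner> scanners = getScannersNoCompaction();
  seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
      && lazySeekEnabledGlobally, parallelSeekEnabled);
  resetKVHeap(scanners, store.getComparator());
} catch (IOException e) {
  this.store.deleteChangedReaderObserver(this); // roll back the registration
  throw e;
}
{code}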



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-16 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335429#comment-15335429
 ] 

Yu Li commented on HBASE-16032:
-------------------------------

Thanks [~busbey]!

> Possible memory leak in StoreScanner
> ------------------------------------
>
> Key: HBASE-16032
> URL: https://issues.apache.org/jira/browse/HBASE-16032
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16032.patch, HBASE-16032_v2.patch, 
> HBASE-16032_v3.patch
>
>
> We observed frequent full GCs of the RS in our production environment, and 
> after analyzing the heap dump, we found large memory occupancy by 
> HStore#changedReaderObservers; the map surprisingly contained 7500w (75 
> million) objects...
> After some debugging, I located a possible memory leak in the StoreScanner 
> constructor:
> {code}
>   public StoreScanner(Store store, ScanInfo scanInfo, Scan scan,
>       final NavigableSet<byte[]> columns, long readPt) throws IOException {
>     this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
>     if (columns != null && scan.isRaw()) {
>       throw new DoNotRetryIOException("Cannot specify any column for a raw scan");
>     }
>     matcher = new ScanQueryMatcher(scan, scanInfo, columns,
>         ScanType.USER_SCAN, Long.MAX_VALUE, HConstants.LATEST_TIMESTAMP,
>         oldestUnexpiredTS, now, store.getCoprocessorHost());
>     this.store.addChangedReaderObserver(this);
>     // Pass columns to try to filter out unnecessary StoreFiles.
>     List<KeyValueScanner> scanners = getScannersNoCompaction();
>     ...
>     seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
>         && lazySeekEnabledGlobally, parallelSeekEnabled);
>     ...
>     resetKVHeap(scanners, store.getComparator());
>   }
> {code}
> If any exception is thrown after 
> {{this.store.addChangedReaderObserver(this)}}, the returned scanner might be 
> null and there's no chance to remove the scanner from changedReaderObservers, 
> as in {{HRegion#get}}:
> {code}
>     RegionScanner scanner = null;
>     try {
>       scanner = getScanner(scan);
>       scanner.next(results);
>     } finally {
>       if (scanner != null)
>         scanner.close();
>     }
> {code}
> What's more, any exception thrown in the {{HRegion#getScanner}} path will 
> leave scanner==null and leak memory the same way, so we also need to handle 
> this part.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16048) Tag InternalScanner with LimitedPrivate(HBaseInterfaceAudience.COPROC)

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335411#comment-15335411
 ] 

Hadoop QA commented on HBASE-16048:
-----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
34s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 33s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 41s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 135m 27s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811259/16048.v1.txt |
| JIRA Issue | HBASE-16048 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenki

[jira] [Commented] (HBASE-15982) Interface ReplicationEndpoint extends Guava's Service

2016-06-16 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335377#comment-15335377
 ] 

Nick Dimiduk commented on HBASE-15982:
--------------------------------------

Yeah, I would be opposed to such a change on branch-1.1.

> Interface ReplicationEndpoint extends Guava's Service
> -----------------------------------------------------
>
> Key: HBASE-15982
> URL: https://issues.apache.org/jira/browse/HBASE-15982
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
>
> We have Guava's Service leaking into the LimitedPrivate interface 
> ReplicationEndpoint:
> {code}
> public interface ReplicationEndpoint extends Service, 
> ReplicationPeerConfigListener
> {code}
> This required a private patch when I updated Guava for our internal 
> deployments. This is going to be a problem for long-term maintenance and for 
> implementers of pluggable replication endpoints. LP is only less than public 
> by a degree. We shouldn't leak types from third-party code into either 
> Public or LP APIs, in my opinion. Let's fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16048) Tag InternalScanner with LimitedPrivate(HBaseInterfaceAudience.COPROC)

2016-06-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335375#comment-15335375
 ] 

Ted Yu commented on HBASE-16048:


[~mantonov]:
Do you want this in the 1.3 release?

> Tag InternalScanner with LimitedPrivate(HBaseInterfaceAudience.COPROC) 
> -----------------------------------------------------------------------
>
> Key: HBASE-16048
> URL: https://issues.apache.org/jira/browse/HBASE-16048
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Attachments: 16048.v1.txt
>
>
> Some methods (preCompact, preCompactScannerOpen, preFlush, 
> preFlushScannerOpen, etc.) of BaseRegionObserver take InternalScanner both 
> as an input argument and as the return type.
> BaseRegionObserver is tagged with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC) but 
> InternalScanner is tagged with @InterfaceAudience.Private.
> This JIRA is to discuss tagging InternalScanner with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC).
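
Concretely, the change under discussion amounts to retagging the interface, as
in this sketch:

{code}
// Sketch of the proposed retagging (existing methods unchanged):
@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)  // was: @InterfaceAudience.Private
public interface InternalScanner extends Closeable {
  // ...
}
{code}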



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16048) Tag InternalScanner with LimitedPrivate(HBaseInterfaceAudience.COPROC)

2016-06-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16048:
----------------------------
Summary: Tag InternalScanner with 
LimitedPrivate(HBaseInterfaceAudience.COPROC)   (was: Consider tagging 
InternalScanner with LimitedPrivate(HBaseInterfaceAudience.COPROC) )

> Tag InternalScanner with LimitedPrivate(HBaseInterfaceAudience.COPROC) 
> -----------------------------------------------------------------------
>
> Key: HBASE-16048
> URL: https://issues.apache.org/jira/browse/HBASE-16048
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Attachments: 16048.v1.txt
>
>
> Some methods (preCompact, preCompactScannerOpen, preFlush, 
> preFlushScannerOpen, etc.) of BaseRegionObserver take InternalScanner both 
> as an input argument and as the return type.
> BaseRegionObserver is tagged with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC) but 
> InternalScanner is tagged with @InterfaceAudience.Private.
> This JIRA is to discuss tagging InternalScanner with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14743) Add metrics around HeapMemoryManager

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335343#comment-15335343
 ] 

Hadoop QA commented on HBASE-14743:
-----------------------------------

| (/) *+1 overall* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| 0 | mvndep | 0m 17s | Maven dependency ordering for branch |
| +1 | mvninstall | 3m 14s | master passed |
| +1 | compile | 1m 23s | master passed with JDK v1.8.0 |
| +1 | compile | 0m 52s | master passed with JDK v1.7.0_80 |
| +1 | checkstyle | 0m 28s | master passed |
| +1 | mvneclipse | 0m 34s | master passed |
| +1 | findbugs | 2m 39s | master passed |
| +1 | javadoc | 1m 6s | master passed with JDK v1.8.0 |
| +1 | javadoc | 0m 55s | master passed with JDK v1.7.0_80 |
| 0 | mvndep | 0m 10s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 4s | the patch passed |
| +1 | compile | 1m 11s | the patch passed with JDK v1.8.0 |
| +1 | javac | 1m 11s | the patch passed |
| +1 | compile | 0m 52s | the patch passed with JDK v1.7.0_80 |
| +1 | javac | 0m 52s | the patch passed |
| +1 | checkstyle | 0m 28s | the patch passed |
| +1 | mvneclipse | 0m 32s | the patch passed |
| +1 | whitespace | 0m 1s | Patch has no whitespace issues. |
| +1 | hadoopcheck | 27m 7s | Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. |
| +1 | findbugs | 3m 14s | the patch passed |
| +1 | javadoc | 1m 5s | the patch passed with JDK v1.8.0 |
| +1 | javadoc | 0m 55s | the patch passed with JDK v1.7.0_80 |
| +1 | unit | 0m 15s | hbase-hadoop-compat in the patch passed. |
| +1 | unit | 0m 19s | hbase-hadoop2-compat in the patch passed. |
| +1 | unit | 82m 8s | hbase-server in the patch passed. |
| +1 | asflicense | 0m 41s | Patch does not generate ASF License warnings. |
| | | 132m 24s | |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811261/HBASE-14743.009.rw3.patch |
| JIRA Issue | HBASE-14743 |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle co

[jira] [Commented] (HBASE-15806) An endpoint-based export tool

2016-06-16 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335338#comment-15335338
 ] 

ChiaPing Tsai commented on HBASE-15806:
---

I ignored the user token, so HBase must have permission to access the output 
directory.

> An endpoint-based export tool
> -
>
> Key: HBASE-15806
> URL: https://issues.apache.org/jira/browse/HBASE-15806
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: Experiment.png, HBASE-15806-v1.patch, 
> HBASE-15806-v2.patch, HBASE-15806-v3.patch, HBASE-15806.patch
>
>
> The time for exporting a table can be reduced if we use the endpoint technique 
> to have the region servers write the HDFS files rather than the HBase client.
> In my experiments, the elapsed time of the endpoint-based export can be less 
> than half that of the current export tool (with HDFS compression enabled).
> But the shortcoming is that we need to alter the table to deploy the endpoint.
> Any comments about this? Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16049) TestRowProcessorEndpoint is failing on Apache Builds

2016-06-16 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335323#comment-15335323
 ] 

Guanghao Zhang commented on HBASE-16049:


The latest failure log:
https://builds.apache.org/job/HBase-Flaky-Tests/1335/testReport/junit/org.apache.hadoop.hbase.coprocessor/TestRowProcessorEndpoint/testMultipleRows/

The test results show many CallQueueTooBigExceptions. The default queue length 
is 300: 30 (the default handler count) * 10 (the default queue length per 
handler). So we should set RpcScheduler.IPC_SERVER_MAX_CALLQUEUE_LENGTH to a 
bigger value before the unit test, as sketched below.
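
A minimal sketch of that change (assuming a JUnit class with an 
HBaseTestingUtility named TEST_UTIL; the value 1024 is illustrative):

{code}
// Sketch: raise the RPC call queue limit before the mini cluster starts so the
// test's burst of concurrent calls does not trip CallQueueTooBigException.
@BeforeClass
public static void setUpBeforeClass() throws Exception {
  Configuration conf = TEST_UTIL.getConfiguration();
  // Default limit is 30 handlers * 10 queued calls per handler = 300.
  conf.setInt(RpcScheduler.IPC_SERVER_MAX_CALLQUEUE_LENGTH, 1024);
  TEST_UTIL.startMiniCluster();
}
{code}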

> TestRowProcessorEndpoint is failing on Apache Builds
> 
>
> Key: HBASE-16049
> URL: https://issues.apache.org/jira/browse/HBASE-16049
> Project: HBase
>  Issue Type: Bug
>Reporter: Mikhail Antonov
>
> example log 
> https://paste.apache.org/46Uh



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16048) Consider tagging InternalScanner with LimitedPrivate(HBaseInterfaceAudience.COPROC)

2016-06-16 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335315#comment-15335315
 ] 

Joep Rottinghuis commented on HBASE-16048:
--

Thanks [~apurtell] and [~yuzhih...@gmail.com]. 16048.v1.txt looks good to me.

> Consider tagging InternalScanner with 
> LimitedPrivate(HBaseInterfaceAudience.COPROC) 
> 
>
> Key: HBASE-16048
> URL: https://issues.apache.org/jira/browse/HBASE-16048
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Attachments: 16048.v1.txt
>
>
> Some methods (preCompact, preCompactScannerOpen, preFlush, 
> preFlushScannerOpen, etc.) of BaseRegionObserver take InternalScanner both as 
> an input argument and as the return type.
> BaseRegionObserver is tagged with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC) but 
> InternalScanner is tagged with @InterfaceAudience.Private.
> This JIRA is to discuss tagging InternalScanner with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16045) endtime argument for VerifyReplication was incorrectly specified in usage

2016-06-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335313#comment-15335313
 ] 

Hudson commented on HBASE-16045:


FAILURE: Integrated in HBase-Trunk_matrix #1061 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1061/])
HBASE-16045 endtime argument for VerifyReplication was incorrectly (tedyu: rev 
d8902ba0e68ec7bc38a8aa8d212353c380e5d378)
* src/main/asciidoc/_chapters/ops_mgt.adoc
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java


> endtime argument for VerifyReplication was incorrectly specified in usage
> -
>
> Key: HBASE-16045
> URL: https://issues.apache.org/jira/browse/HBASE-16045
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16045.v1.txt, 16045.v2.txt, 16045.v3.txt
>
>
> Working on a customer case where the following was given for verifyrep:
> {code}
> --starttime=145679040 \ 
> --stoptime=145687680
> {code}
> The customer complained that the timestamp of a (sample) row reported as 
> ONLY_IN_PEER_TABLE_ROWS corresponded to a time outside the given range.
> The code says:
> {code}
> final String endTimeArgKey = "--endtime=";
> {code}
> It turns out that the usage string was wrong:
> {code}
> System.err.println("Usage: verifyrep [--starttime=X]" +
> " [--stoptime=Y] [--families=A]  ");
> {code}
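
Presumably the fix brings the usage string in line with the parsed key; a 
sketch of the corrected message (the trailing positional arguments were 
stripped by the mailer, so they are shown here as assumed placeholders):

{code}
// Sketch only; the committed wording may differ. The key point is "--endtime"
// matching endTimeArgKey above. <peerid> and <tablename> are assumed
// placeholders for the stripped positional arguments.
System.err.println("Usage: verifyrep [--starttime=X]" +
    " [--endtime=Y] [--families=A] <peerid> <tablename>");
{code}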



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335312#comment-15335312
 ] 

Hudson commented on HBASE-5291:
---

FAILURE: Integrated in HBase-Trunk_matrix #1061 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1061/])
HBASE-5291 Addendum 2 passes correct path to deleteRecursively (tedyu: rev 
6d0e0e3721fd7a0c020ce5c746c9369cb4220393)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/http/HttpServerFunctionalTest.java


> Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
> -
>
> Key: HBASE-5291
> URL: https://issues.apache.org/jira/browse/HBASE-5291
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver, security
>Reporter: Andrew Purtell
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 5291-addendum.2, HBASE-5291-addendum.patch, 
> HBASE-5291.001.patch, HBASE-5291.002.patch, HBASE-5291.003.patch, 
> HBASE-5291.004.patch, HBASE-5291.005-0.98.patch, 
> HBASE-5291.005-branch-1.patch, HBASE-5291.005.patch
>
>
> Like HADOOP-7119, the same motivations:
> {quote}
> Hadoop RPC already supports Kerberos authentication. 
> {quote}
> As does the HBase secure RPC engine.
> {quote}
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide 
> a unified authentication mechanism and single sign-on for web UI and RPC.
> {quote}
> Also like HADOOP-7119, the same solution:
> A servlet filter is configured in front of all Hadoop web consoles for 
> authentication.
> This filter verifies whether the incoming request is already authenticated by 
> the presence of a signed HTTP cookie. If the cookie is present, its signature 
> is valid, and its value has not expired, the request continues to the page it 
> invoked. If the cookie is not present, is invalid, or has expired, the 
> request is delegated to an authenticator handler. The 
> authenticator handler then is responsible for requesting/validating the 
> user-agent for the user credentials. This may require one or more additional 
> interactions between the authenticator handler and the user-agent (which will 
> be multiple HTTP requests). Once the authenticator handler verifies the 
> credentials and generates an authentication token, a signed cookie is 
> returned to the user-agent for all subsequent invocations.
> The authenticator handler is pluggable and 2 implementations are provided out 
> of the box: pseudo/simple and kerberos.
> 1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
> pseudo/simple authentication. It trusts the value of the user.name query 
> string parameter. The pseudo/simple authenticator handler supports an 
> anonymous mode which accepts any request without requiring the user.name 
> query string parameter to create the token. This is the default behavior, 
> preserving the behavior of the HBase web consoles before this patch.
> 2. The kerberos authenticator handler implements the Kerberos HTTP SPNEGO 
> implementation. This authenticator handler will generate a token only if a 
> successful Kerberos HTTP SPNEGO interaction is performed between the 
> user-agent and the authenticator. Browsers like Firefox and Internet Explorer 
> support Kerberos HTTP SPNEGO.
> We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
> matter of wiring up the filter to our infoservers in a similar manner. 
> And from 
> https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086
> {quote}
> Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
> authentication for webapps via a filter. You should consider using it. You 
> don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
> artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15429) Add a split policy for busy regions

2016-06-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335314#comment-15335314
 ] 

Hudson commented on HBASE-15429:


FAILURE: Integrated in HBase-Trunk_matrix #1061 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1061/])
HBASE-15429 Add split policy for busy regions (eclark: rev 
3abd52bdc6db926d930fd94cfce5bd7ba6fd005f)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/BusyRegionSplitPolicy.java
* hbase-common/src/main/resources/hbase-default.xml
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java


> Add a split policy for busy regions
> ---
>
> Key: HBASE-15429
> URL: https://issues.apache.org/jira/browse/HBASE-15429
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15429-V1.patch, HBASE-15429-V2.patch, 
> HBASE-15429.patch
>
>
> Currently, all region split policies are based on size. However, in certain 
> cases it is a wise choice to make the split decision based on the number of 
> requests to the region and split busy regions.
> A crude heuristic: if a region often blocks writes and throws 
> RegionTooBusyException, it's probably a good idea to split it.
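
For anyone wanting to try the policy, a minimal sketch of opting a single table 
in via its descriptor (assumes an Admin handle named admin; the table name is 
illustrative):

{code}
// Sketch: select BusyRegionSplitPolicy for one table at creation time.
HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("busy_table"));
htd.setRegionSplitPolicyClassName(BusyRegionSplitPolicy.class.getName());
admin.createTable(htd);
{code}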



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16032) Possible memory leak in StoreScanner

2016-06-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335299#comment-15335299
 ] 

Ted Yu commented on HBASE-16032:


Patch v3 looks good.

> Possible memory leak in StoreScanner
> 
>
> Key: HBASE-16032
> URL: https://issues.apache.org/jira/browse/HBASE-16032
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.1.6, 1.3.1, 0.98.21, 1.2.3
>
> Attachments: HBASE-16032.patch, HBASE-16032_v2.patch, 
> HBASE-16032_v3.patch
>
>
> We observed frequent full GCs of a RS in our production environment, and after 
> analyzing the heap dump, we found large memory occupancy by 
> HStore#changedReaderObservers; the map surprisingly contained 75 million 
> (7500w) objects...
> After some debugging, I located a possible memory leak in the StoreScanner 
> constructor:
> {code}
>   public StoreScanner(Store store, ScanInfo scanInfo, Scan scan, final 
> NavigableSet columns,
>   long readPt)
>   throws IOException {
> this(store, scan, scanInfo, columns, readPt, scan.getCacheBlocks());
> if (columns != null && scan.isRaw()) {
>   throw new DoNotRetryIOException("Cannot specify any column for a raw 
> scan");
> }
> matcher = new ScanQueryMatcher(scan, scanInfo, columns,
> ScanType.USER_SCAN, Long.MAX_VALUE, HConstants.LATEST_TIMESTAMP,
> oldestUnexpiredTS, now, store.getCoprocessorHost());
> this.store.addChangedReaderObserver(this);
> // Pass columns to try to filter out unnecessary StoreFiles.
> List scanners = getScannersNoCompaction();
> ...
> seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
> && lazySeekEnabledGlobally, parallelSeekEnabled);
> ...
> resetKVHeap(scanners, store.getComparator());
>   }
> {code}
> If there's any Exception thrown after 
> {{this.store.addChangedReaderObserver(this)}}, the returned scanner might be 
> null and there's no chance to remove the scanner from changedReaderObservers, 
> like in {{HRegion#get}}
> {code}
> RegionScanner scanner = null;
> try {
>   scanner = getScanner(scan);
>   scanner.next(results);
> } finally {
>   if (scanner != null)
> scanner.close();
> }
> {code}
> What's more, any exception thrown in the {{HRegion#getScanner}} path will 
> leave scanner==null and thus leak memory, so we also need to handle that part.
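
A minimal sketch of the kind of cleanup this implies (illustrative, not the 
attached patch): deregister the observer if construction fails after 
registration.

{code}
// Sketch only -- not HBASE-16032_v3.patch. If anything throws after the
// observer was registered, undo the registration before propagating, so the
// Store does not keep a reference to a half-constructed scanner.
this.store.addChangedReaderObserver(this);
try {
  List<KeyValueScanner> scanners = getScannersNoCompaction();
  seekScanners(scanners, matcher.getStartKey(), explicitColumnQuery
      && lazySeekEnabledGlobally, parallelSeekEnabled);
  resetKVHeap(scanners, store.getComparator());
} catch (IOException e) {
  this.store.deleteChangedReaderObserver(this);
  throw e;
}
{code}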



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15619) Performance regression observed: Empty random read(get) performance of branch-1 worse than 0.98

2016-06-16 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335287#comment-15335287
 ] 

Yu Li commented on HBASE-15619:
---

I see; then HBASE-15971 is more likely to resolve the problem here, since I did 
the test with multiple YCSB instances. :-)

I'll try HBASE-15971 out for sure and update the result here (maybe days later, 
though, due to online support work...). Thanks for the information, [~stack].

> Performance regression observed: Empty random read(get) performance of 
> branch-1 worse than 0.98
> ---
>
> Key: HBASE-15619
> URL: https://issues.apache.org/jira/browse/HBASE-15619
> Project: HBase
>  Issue Type: Bug
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: compare.png, flamegraph-108588.098.svg, 
> flamegraph-1221.branch-1.svg, flamegraph-135684.1.1.svg
>
>
> As titled, I observed the perf regression in the final stress testing before 
> upgrading our online cluster to 1.x. More details as follows:
> 1. HBase version in the comparison test:
>   * 0.98: based on 0.98.12 with some backports, among which HBASE-11297 is 
> the most important perf-related one (especially under high stress)
>   * 1.x: checked 3 releases in total
>  1) 1.1.2 with important perf fixes/improvements including HBASE-15031 
> and HBASE-14465
>  2) 1.1.4 release
>  3) 1.2.1RC1
> 2. Test environment
> * YCSB: 0.7.0 with 
> [YCSB-651|https://github.com/brianfrankcooper/YCSB/pull/651] applied
> * Client: 4 physical nodes, each with 8 YCSB instances, each instance with 
> 100 threads
> * Server: 1 Master with 3 RS, each RS with 256 handlers and 64G heap
> * Hardware: 64-core CPU, 256GB Mem, 10Gb Net, 1 PCIe-SSD and 11 HDD, same 
> hardware for client and server
> 3. Test cases
> * -p fieldcount=1 -p fieldlength=128 -p readproportion=1
> * case #1: read against empty table
> * -case #2: lrucache 100% hit-
> * -case #3: BLOCKCACHE=>false-
> 4. Test result
> * 1.1.4 and 1.2.1 have a similar perf (less than 2% deviation) as 1.1.2+, so 
> will only paste comparison data of 0.98.12+ and 1.1.2+
> * per-RS Throughput(ops/s)
> ||HBaseVersion||case#1||-case#2-||-case#3-||
> |0.98.12+|383562|-257493-|-47594-|
> |1.1.2+|363050|-232757-|-35872-|
> * AverageLatency(us)
> ||HBaseVersion||case#1||-case#2-||-case#3-||
> |0.98.12+|2774|-4134-|-22371-|
> |1.1.2+|2930|-4572-|-29690-|
> It seems there's a perf regression in the RPCServer (we tried a 0.98 client 
> against a 1.x server and observed perf similar to a 1.x client).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15619) Performance regression observed: Empty random read(get) performance of branch-1 worse than 0.98

2016-06-16 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335276#comment-15335276
 ] 

Yu Li edited comment on HBASE-15619 at 6/17/16 2:56 AM:


Yep, the whole work is still not completely done, but there is something to share 
in advance:
1. HDFS-9668
- After the network-card bandwidth was raised to 10Gb, there was much higher 
IO pressure on the disks, and the HDDs were more frequently exhausted at 100% 
util. In this case we found that SSD reads/writes were also affected, and RT 
reached as high as ~10s. HDFS-9668 is the correct way to resolve it.

2. Improvements on ShortCircuitCache (SCC)
- We found a locking problem in SCC and my workmate has already made a fix; 
testing is done on our side but the JIRA is not yet opened. I will add the link 
as soon as the JIRA is created.


was (Author: carp84):
Yep, the whole work is still not completely done but still something to share 
in advance:
1. HDFS-9668
- When network-card bandwidth enhanced to 10Gb, there would be much higher 
IO pressure on disk, and HDD is more frequently exhausted with 100% util. In 
this case, we found SSD read/write also affected and RT reached as high as 
~10s. HDFS-9668 is the correct way to resolve it
2. Improvements on ShortCircuitCache (SCC)
- We found some locking problem on SCC and my workmate already made a fix, 
testing done on our side but still not opening JIRA, will add the link as soon 
as JIRA created for this.

> Performance regression observed: Empty random read(get) performance of 
> branch-1 worse than 0.98
> ---
>
> Key: HBASE-15619
> URL: https://issues.apache.org/jira/browse/HBASE-15619
> Project: HBase
>  Issue Type: Bug
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: compare.png, flamegraph-108588.098.svg, 
> flamegraph-1221.branch-1.svg, flamegraph-135684.1.1.svg
>
>
> As titled, I observed the perf regression in the final stress testing before 
> upgrading our online cluster to 1.x. More details as follows:
> 1. HBase version in the comparison test:
>   * 0.98: based on 0.98.12 with some backports, among which HBASE-11297 is 
> the most important perf-related one (especially under high stress)
>   * 1.x: checked 3 releases in total
>  1) 1.1.2 with important perf fixes/improvements including HBASE-15031 
> and HBASE-14465
>  2) 1.1.4 release
>  3) 1.2.1RC1
> 2. Test environment
> * YCSB: 0.7.0 with 
> [YCSB-651|https://github.com/brianfrankcooper/YCSB/pull/651] applied
> * Client: 4 physical nodes, each with 8 YCSB instances, each instance with 
> 100 threads
> * Server: 1 Master with 3 RS, each RS with 256 handlers and 64G heap
> * Hardware: 64-core CPU, 256GB Mem, 10Gb Net, 1 PCIe-SSD and 11 HDD, same 
> hardware for client and server
> 3. Test cases
> * -p fieldcount=1 -p fieldlength=128 -p readproportion=1
> * case #1: read against empty table
> * -case #2: lrucache 100% hit-
> * -case #3: BLOCKCACHE=>false-
> 4. Test result
> * 1.1.4 and 1.2.1 have a similar perf (less than 2% deviation) as 1.1.2+, so 
> will only paste comparison data of 0.98.12+ and 1.1.2+
> * per-RS Throughput(ops/s)
> ||HBaseVersion||case#1||-case#2-||-case#3-||
> |0.98.12+|383562|-257493-|-47594-|
> |1.1.2+|363050|-232757-|-35872-|
> * AverageLatency(us)
> ||HBaseVersion||case#1||-case#2-||-case#3-||
> |0.98.12+|2774|-4134-|-22371-|
> |1.1.2+|2930|-4572-|-29690-|
> It seems there's a perf regression in the RPCServer (we tried a 0.98 client 
> against a 1.x server and observed perf similar to a 1.x client).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15619) Performance regression observed: Empty random read(get) performance of branch-1 worse than 0.98

2016-06-16 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335276#comment-15335276
 ] 

Yu Li commented on HBASE-15619:
---

Yep, the whole work is still not completely done, but there is something to share 
in advance:
1. HDFS-9668
- After the network-card bandwidth was raised to 10Gb, there was much higher 
IO pressure on the disks, and the HDDs were more frequently exhausted at 100% 
util. In this case we found that SSD reads/writes were also affected, and RT 
reached as high as ~10s. HDFS-9668 is the correct way to resolve it.
2. Improvements on ShortCircuitCache (SCC)
- We found a locking problem in SCC and my workmate has already made a fix; 
testing is done on our side but the JIRA is not yet opened. I will add the link 
as soon as the JIRA is created.

> Performance regression observed: Empty random read(get) performance of 
> branch-1 worse than 0.98
> ---
>
> Key: HBASE-15619
> URL: https://issues.apache.org/jira/browse/HBASE-15619
> Project: HBase
>  Issue Type: Bug
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: compare.png, flamegraph-108588.098.svg, 
> flamegraph-1221.branch-1.svg, flamegraph-135684.1.1.svg
>
>
> As titled, I observed the perf regression in the final stress testing before 
> upgrading our online cluster to 1.x. More details as follows:
> 1. HBase version in the comparison test:
>   * 0.98: based on 0.98.12 with some backports, among which HBASE-11297 is 
> the most important perf-related one (especially under high stress)
>   * 1.x: checked 3 releases in total
>  1) 1.1.2 with important perf fixes/improvements including HBASE-15031 
> and HBASE-14465
>  2) 1.1.4 release
>  3) 1.2.1RC1
> 2. Test environment
> * YCSB: 0.7.0 with 
> [YCSB-651|https://github.com/brianfrankcooper/YCSB/pull/651] applied
> * Client: 4 physical nodes, each with 8 YCSB instances, each instance with 
> 100 threads
> * Server: 1 Master with 3 RS, each RS with 256 handlers and 64G heap
> * Hardware: 64-core CPU, 256GB Mem, 10Gb Net, 1 PCIe-SSD and 11 HDD, same 
> hardware for client and server
> 3. Test cases
> * -p fieldcount=1 -p fieldlength=128 -p readproportion=1
> * case #1: read against empty table
> * -case #2: lrucache 100% hit-
> * -case #3: BLOCKCACHE=>false-
> 4. Test result
> * 1.1.4 and 1.2.1 have a similar perf (less than 2% deviation) as 1.1.2+, so 
> will only paste comparison data of 0.98.12+ and 1.1.2+
> * per-RS Throughput(ops/s)
> ||HBaseVersion||case#1||-case#2-||-case#3-||
> |0.98.12+|383562|-257493-|-47594-|
> |1.1.2+|363050|-232757-|-35872-|
> * AverageLatency(us)
> ||HBaseVersion||case#1||-case#2-||-case#3-||
> |0.98.12+|2774|-4134-|-22371-|
> |1.1.2+|2930|-4572-|-29690-|
> It seems there's a perf regression in the RPCServer (we tried a 0.98 client 
> against a 1.x server and observed perf similar to a 1.x client).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16053) Master code is not setting the table in ENABLING state in create table

2016-06-16 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-16053:
--
Status: Patch Available  (was: Open)

> Master code is not setting the table in ENABLING state in create table
> --
>
> Key: HBASE-16053
> URL: https://issues.apache.org/jira/browse/HBASE-16053
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: hbase-16053_v1.patch
>
>
> Unit test logs are filled with the following because, in master, unlike 
> branch-1, we are missing the code that sets the table to the ENABLING state 
> before assignment in CreateTableProcedure.
> {code}
> 2016-06-10 17:48:15,832 ERROR 
> [B.defaultRpcServer.handler=0,queue=0,port=60448] 
> master.TableStateManager(134): Unable to get table testRegionCache state
> org.apache.hadoop.hbase.TableNotFoundException: testRegionCache
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2320)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2900)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1334)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2273)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:116)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:138)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$2.run(RpcExecutor.java:113)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16053) Master code is not setting the table in ENABLING state in create table

2016-06-16 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-16053:
--
Attachment: hbase-16053_v1.patch

Simple patch. 
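
For context, a sketch of the shape such a patch presumably takes, mirroring 
branch-1 (assumed; see the attachment for the actual change):

{code}
// Assumed sketch of CreateTableProcedure's assign step: mark the table
// ENABLING before regions are assigned, then ENABLED once assignment is done.
env.getMasterServices().getTableStateManager()
    .setTableState(tableName, TableState.State.ENABLING);
// ... assign the regions ...
env.getMasterServices().getTableStateManager()
    .setTableState(tableName, TableState.State.ENABLED);
{code}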

> Master code is not setting the table in ENABLING state in create table
> --
>
> Key: HBASE-16053
> URL: https://issues.apache.org/jira/browse/HBASE-16053
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: hbase-16053_v1.patch
>
>
> Unit test logs are filled with the following because, in master, unlike 
> branch-1, we are missing the code that sets the table to the ENABLING state 
> before assignment in CreateTableProcedure.
> {code}
> 2016-06-10 17:48:15,832 ERROR 
> [B.defaultRpcServer.handler=0,queue=0,port=60448] 
> master.TableStateManager(134): Unable to get table testRegionCache state
> org.apache.hadoop.hbase.TableNotFoundException: testRegionCache
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2320)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2900)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1334)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2273)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:116)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:138)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$2.run(RpcExecutor.java:113)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16053) Master code is not setting the table in ENABLING state in create table

2016-06-16 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-16053:
-

 Summary: Master code is not setting the table in ENABLING state in 
create table
 Key: HBASE-16053
 URL: https://issues.apache.org/jira/browse/HBASE-16053
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0


Unit test logs are filled with the following because, in master, unlike 
branch-1, we are missing the code that sets the table to the ENABLING state before 
assignment in CreateTableProcedure.

{code}
2016-06-10 17:48:15,832 ERROR [B.defaultRpcServer.handler=0,queue=0,port=60448] 
master.TableStateManager(134): Unable to get table testRegionCache state
org.apache.hadoop.hbase.TableNotFoundException: testRegionCache
at 
org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:174)
at 
org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:131)
at 
org.apache.hadoop.hbase.master.AssignmentManager.onRegionOpen(AssignmentManager.java:2320)
at 
org.apache.hadoop.hbase.master.AssignmentManager.onRegionTransition(AssignmentManager.java:2900)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.reportRegionStateTransition(MasterRpcServices.java:1334)
at 
org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8623)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2273)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:116)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:138)
at org.apache.hadoop.hbase.ipc.RpcExecutor$2.run(RpcExecutor.java:113)
at java.lang.Thread.run(Thread.java:745)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16048) Consider tagging InternalScanner with LimitedPrivate(HBaseInterfaceAudience.COPROC)

2016-06-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16048:
---
Status: Patch Available  (was: Open)

> Consider tagging InternalScanner with 
> LimitedPrivate(HBaseInterfaceAudience.COPROC) 
> 
>
> Key: HBASE-16048
> URL: https://issues.apache.org/jira/browse/HBASE-16048
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Attachments: 16048.v1.txt
>
>
> Some methods (preCompact, preCompactScannerOpen, preFlush, 
> preFlushScannerOpen, etc.) of BaseRegionObserver take InternalScanner both as 
> an input argument and as the return type.
> BaseRegionObserver is tagged with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC) but 
> InternalScanner is tagged with @InterfaceAudience.Private.
> This JIRA is to discuss tagging InternalScanner with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-16 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335240#comment-15335240
 ] 

Josh Elser commented on HBASE-5291:
---

bq. I see the config property issue was fixed with the addendum patch

Yep, you got it.

bq. It doesn't look like you have sub directories where this is called in the 
tests though.

It's called on a parent directory, but you're right in that there are no 
directories contained in that directory.

> Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
> -
>
> Key: HBASE-5291
> URL: https://issues.apache.org/jira/browse/HBASE-5291
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver, security
>Reporter: Andrew Purtell
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 5291-addendum.2, HBASE-5291-addendum.patch, 
> HBASE-5291.001.patch, HBASE-5291.002.patch, HBASE-5291.003.patch, 
> HBASE-5291.004.patch, HBASE-5291.005-0.98.patch, 
> HBASE-5291.005-branch-1.patch, HBASE-5291.005.patch
>
>
> Like HADOOP-7119, the same motivations:
> {quote}
> Hadoop RPC already supports Kerberos authentication. 
> {quote}
> As does the HBase secure RPC engine.
> {quote}
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide 
> a unified authentication mechanism and single sign-on for web UI and RPC.
> {quote}
> Also like HADOOP-7119, the same solution:
> A servlet filter is configured in front of all Hadoop web consoles for 
> authentication.
> This filter verifies whether the incoming request is already authenticated by 
> the presence of a signed HTTP cookie. If the cookie is present, its signature 
> is valid, and its value has not expired, the request continues to the page it 
> invoked. If the cookie is not present, is invalid, or has expired, the 
> request is delegated to an authenticator handler. The 
> authenticator handler then is responsible for requesting/validating the 
> user-agent for the user credentials. This may require one or more additional 
> interactions between the authenticator handler and the user-agent (which will 
> be multiple HTTP requests). Once the authenticator handler verifies the 
> credentials and generates an authentication token, a signed cookie is 
> returned to the user-agent for all subsequent invocations.
> The authenticator handler is pluggable and 2 implementations are provided out 
> of the box: pseudo/simple and kerberos.
> 1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
> pseudo/simple authentication. It trusts the value of the user.name query 
> string parameter. The pseudo/simple authenticator handler supports an 
> anonymous mode which accepts any request without requiring the user.name 
> query string parameter to create the token. This is the default behavior, 
> preserving the behavior of the HBase web consoles before this patch.
> 2. The kerberos authenticator handler implements the Kerberos HTTP SPNEGO 
> implementation. This authenticator handler will generate a token only if a 
> successful Kerberos HTTP SPNEGO interaction is performed between the 
> user-agent and the authenticator. Browsers like Firefox and Internet Explorer 
> support Kerberos HTTP SPNEGO.
> We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
> matter of wiring up the filter to our infoservers in a similar manner. 
> And from 
> https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086
> {quote}
> Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
> authentication for webapps via a filter. You should consider using it. You 
> don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
> artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16047) TestFastFail is broken again

2016-06-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335209#comment-15335209
 ] 

Hudson commented on HBASE-16047:


FAILURE: Integrated in HBase-1.4 #219 (See 
[https://builds.apache.org/job/HBase-1.4/219/])
HBASE-16047 TestFastFail is broken again (antonov: rev 
560bf74884faea14a8d97d2f67c7c9be95918ada)
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFastFail.java


> TestFastFail is broken again
> 
>
> Key: HBASE-16047
> URL: https://issues.apache.org/jira/browse/HBASE-16047
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-16047.v1.patch
>
>
> As found by [~appy] here - http://hbase.x10host.com/flaky-tests/
> Has been failing since 
> https://builds.apache.org/job/HBase-Flaky-Tests/1294/#showFailuresLink,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16045) endtime argument for VerifyReplication was incorrectly specified in usage

2016-06-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335210#comment-15335210
 ] 

Hudson commented on HBASE-16045:


FAILURE: Integrated in HBase-1.4 #219 (See 
[https://builds.apache.org/job/HBase-1.4/219/])
HBASE-16045 endtime argument for VerifyReplication was incorrectly (tedyu: rev 
4c1db3cb03683eef735c9b3a1623beecaa88db43)
* src/main/asciidoc/_chapters/ops_mgt.adoc
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java


> endtime argument for VerifyReplication was incorrectly specified in usage
> -
>
> Key: HBASE-16045
> URL: https://issues.apache.org/jira/browse/HBASE-16045
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16045.v1.txt, 16045.v2.txt, 16045.v3.txt
>
>
> Working on a customer case where the following was given for verifyrep:
> {code}
> --starttime=145679040 \ 
> --stoptime=145687680
> {code}
> The customer complained that the timestamp of a (sample) row reported as 
> ONLY_IN_PEER_TABLE_ROWS corresponded to a time outside the given range.
> The code says:
> {code}
> final String endTimeArgKey = "--endtime=";
> {code}
> It turns out that the usage string was wrong:
> {code}
> System.err.println("Usage: verifyrep [--starttime=X]" +
> " [--stoptime=Y] [--families=A]  ");
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16047) TestFastFail is broken again

2016-06-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335194#comment-15335194
 ] 

Hudson commented on HBASE-16047:


FAILURE: Integrated in HBase-1.3 #741 (See 
[https://builds.apache.org/job/HBase-1.3/741/])
HBASE-16047 TestFastFail is broken again (antonov: rev 
9fde544b02794f3f7e36011ce38f7f1debea2edb)
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFastFail.java


> TestFastFail is broken again
> 
>
> Key: HBASE-16047
> URL: https://issues.apache.org/jira/browse/HBASE-16047
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-16047.v1.patch
>
>
> As found by [~appy] here - http://hbase.x10host.com/flaky-tests/
> Has been failing since 
> https://builds.apache.org/job/HBase-Flaky-Tests/1294/#showFailuresLink,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14743) Add metrics around HeapMemoryManager

2016-06-16 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-14743:
--
Attachment: HBASE-14743.009.rw3.patch

Metrics name changed to "Memory".

> Add metrics around HeapMemoryManager
> 
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-14743.001.patch, HBASE-14743.002.patch, 
> HBASE-14743.003.patch, HBASE-14743.004.patch, HBASE-14743.005.patch, 
> HBASE-14743.006.patch, HBASE-14743.007.patch, HBASE-14743.008.patch, 
> HBASE-14743.009.patch, HBASE-14743.009.rw3.patch, HBASE-14743.009.v2.patch, 
> Screen Shot 2016-06-16 at 5.39.13 PM.png
>
>
> it would be good to know how many invocations there have been.
> How many decided to expand memstore.
> How many decided to expand block cache.
> How many decided to do nothing.
> etc.
> When that's done use those metrics to clean up the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14743) Add metrics around HeapMemoryManager

2016-06-16 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335146#comment-15335146
 ] 

Reid Chan commented on HBASE-14743:
---

Forgot this part; I'm renaming it now, please wait a moment.

> Add metrics around HeapMemoryManager
> 
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-14743.001.patch, HBASE-14743.002.patch, 
> HBASE-14743.003.patch, HBASE-14743.004.patch, HBASE-14743.005.patch, 
> HBASE-14743.006.patch, HBASE-14743.007.patch, HBASE-14743.008.patch, 
> HBASE-14743.009.patch, HBASE-14743.009.v2.patch, Screen Shot 2016-06-16 at 
> 5.39.13 PM.png
>
>
> it would be good to know how many invocations there have been.
> How many decided to expand memstore.
> How many decided to expand block cache.
> How many decided to do nothing.
> etc.
> When that's done use those metrics to clean up the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15982) Interface ReplicationEndpoint extends Guava's Service

2016-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335147#comment-15335147
 ] 

Andrew Purtell commented on HBASE-15982:


How tolerant are we of a one-time change in the inheritance hierarchy of the 
LP(coproc) interface ReplicationEndpoint?

No problem to commit to branch-1 and master. For 1.3 it's possible, depending on 
whether [~mantonov] is still accepting latecomers. For 0.98 we can also do it, as 
each 0.98.x is a minor release. Branches 1.2 ([~busbey]) and 1.1 ([~ndimiduk]) 
would be out, I presume.

> Interface ReplicationEndpoint extends Guava's Service
> -
>
> Key: HBASE-15982
> URL: https://issues.apache.org/jira/browse/HBASE-15982
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
>
> We have Guava's Service leaking into the LimitedPrivate interface 
> ReplicationEndpoint:
> {code}
> public interface ReplicationEndpoint extends Service, 
> ReplicationPeerConfigListener
> {code}
> This required a private patch when I updated Guava for our internal 
> deployments. This is going to be a problem for us for long term maintenance 
> and implementers of pluggable replication endpoints. LP is only less than 
> public by a degree. We shouldn't leak types from third-party code into either 
> Public or LP APIs in my opinion. Let's fix.
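
One possible shape of the fix (an assumption, not a committed design) is to 
drop the Guava supertype and mirror the needed lifecycle methods on an 
HBase-owned interface:

{code}
// Assumed sketch only; method names are illustrative, not a committed API.
// The point is that the LP interface no longer extends com.google.common.*.
@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)
public interface ReplicationEndpoint extends ReplicationPeerConfigListener {
  void start();        // stands in for Guava's Service start
  void stop();         // stands in for Guava's Service stop
  boolean isRunning(); // stands in for Guava's Service.isRunning()
}
{code}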



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HBASE-14743) Add metrics around HeapMemoryManager

2016-06-16 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-14743:
--
Comment: was deleted

(was: yes)

> Add metrics around HeapMemoryManager
> 
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-14743.001.patch, HBASE-14743.002.patch, 
> HBASE-14743.003.patch, HBASE-14743.004.patch, HBASE-14743.005.patch, 
> HBASE-14743.006.patch, HBASE-14743.007.patch, HBASE-14743.008.patch, 
> HBASE-14743.009.patch, HBASE-14743.009.v2.patch, Screen Shot 2016-06-16 at 
> 5.39.13 PM.png
>
>
> it would be good to know how many invocations there have been.
> How many decided to expand memstore.
> How many decided to expand block cache.
> How many decided to do nothing.
> etc.
> When that's done use those metrics to clean up the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15984) Given failure to parse a given WAL that was closed cleanly, replay the WAL.

2016-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335145#comment-15335145
 ] 

Andrew Purtell commented on HBASE-15984:


+1
The suggestion to add metrics for this is very good; can we do that?

> Given failure to parse a given WAL that was closed cleanly, replay the WAL.
> ---
>
> Key: HBASE-15984
> URL: https://issues.apache.org/jira/browse/HBASE-15984
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.1.6, 0.98.21, 1.2.3
>
> Attachments: HBASE-15984.1.patch
>
>
> Subtask for a general workaround for "underlying reader failed / is in a bad 
> state", just for the case where a WAL 1) was closed cleanly and 2) we can tell 
> that our current offset ought not to be the end of parseable entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14743) Add metrics around HeapMemoryManager

2016-06-16 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335144#comment-15335144
 ] 

Reid Chan commented on HBASE-14743:
---

yes

> Add metrics around HeapMemoryManager
> 
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-14743.001.patch, HBASE-14743.002.patch, 
> HBASE-14743.003.patch, HBASE-14743.004.patch, HBASE-14743.005.patch, 
> HBASE-14743.006.patch, HBASE-14743.007.patch, HBASE-14743.008.patch, 
> HBASE-14743.009.patch, HBASE-14743.009.v2.patch, Screen Shot 2016-06-16 at 
> 5.39.13 PM.png
>
>
> it would be good to know how many invocations there have been.
> How many decided to expand memstore.
> How many decided to expand block cache.
> How many decided to do nothing.
> etc.
> When that's done use those metrics to clean up the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15992) Preserve original KeeperException when converted to external exceptions

2016-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335140#comment-15335140
 ] 

Andrew Purtell commented on HBASE-15992:


We will be releasing 0.98.21 in about a month. What do you think, [~haridsv]?

> Preserve original KeeperException when converted to external exceptions
> ---
>
> Key: HBASE-15992
> URL: https://issues.apache.org/jira/browse/HBASE-15992
> Project: HBase
>  Issue Type: Brainstorming
>  Components: hbase
>Affects Versions: 0.98.14
>Reporter: Hari Krishna Dara
>Priority: Minor
>  Labels: client, client-auth, zookeeper
>
> During an investigation in which we were seeing unexpected 
> {{NoServerForRegionException}} errors, the root cause turned out to be a 
> {{KeeperException}} that got lost and so resulted in a misleading top-level 
> indication.
> The underlying exception with partial stacktrace is this:
> {noformat}
> org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = 
> AuthFailed for /hbase/meta-region-server
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>   at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1289)
>   at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
>   at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:684)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.blockUntilAvailable(ZKUtil.java:2032)
>   at 
> org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.blockUntilAvailable(MetaRegionTracker.java:203)
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:58)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateMeta(HConnectionManager.java:1209)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1175)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1301)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1178)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1135)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:976)
> {noformat}
> Here is some additional information:
> * The exception first gets caught 
> [here|https://github.com/apache/hbase/blob/rel/0.98.14/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L366]
> * It gets logged and rethrown from 
> [here|https://github.com/apache/hbase/blob/rel/0.98.14/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java#L279]
> * It gets caught again, logged and rethrown 
> [here|https://github.com/apache/hbase/blob/rel/0.98.14/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java#L693]
> * This finally gets caught and rethrown as InterruptedException 
> [here|https://github.com/apache/hbase/blob/rel/0.98.14/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java#L2037]
> When thrown as {{InterruptedException}}, the cause is lost, so [the code 
> catching 
> it|https://github.com/apache/hbase/blob/rel/0.98.14/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZooKeeperRegistry.java#L65]
>  can't (and currently doesn't) determine the cause. Perhaps the exception 
> should be preserved and passed on to [the 
> caller|https://github.com/apache/hbase/blob/rel/0.98.14/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java#L1312]
>  such that it is available when finally the {{NoServerForRegionException}} is 
> thrown 
> [here|https://github.com/apache/hbase/blob/rel/0.98.14/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java#L1281].
>  Alternatively, a more meaningful exception could also be thrown instead of a 
> misleading {{NoServerForRegionException}}, especially in cases where the 
> failure indicates a more permanent condition.
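
A minimal sketch of the first alternative (illustrative names): attach the 
KeeperException as the cause instead of dropping it.

{code}
// Sketch: preserve the root KeeperException (e.g. AuthFailedException) when
// converting, so callers above HConnectionManager can still inspect it.
static InterruptedException asInterrupted(KeeperException cause) {
  InterruptedException ie = new InterruptedException(
      "KeeperException while blocking until available: " + cause.getMessage());
  ie.initCause(cause); // the key change: keep the cause instead of losing it
  return ie;
}
{code}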



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15130) Backport 0.98 Scan different TimeRange for each column family

2016-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335136#comment-15335136
 ] 

Andrew Purtell edited comment on HBASE-15130 at 6/17/16 1:07 AM:
-

Want to revisit this for 0.98.21 [~churromorales] ? Please consider rebasing 
against the latest 0.98. Hopefully there won't be significant fixups. Then I'll 
test and, hopefully, commit it. 


was (Author: apurtell):
Want to revisit this for 0.98.21 [~churromorales] ? 

> Backport 0.98 Scan different TimeRange for each column family 
> --
>
> Key: HBASE-15130
> URL: https://issues.apache.org/jira/browse/HBASE-15130
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver, Scanners
>Affects Versions: 0.98.17
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 0.98.21
>
> Attachments: HBASE-15130-0.98.patch, HBASE-15130-0.98.v1.patch, 
> HBASE-15130-0.98.v1.patch, HBASE-15130-0.98.v2.patch, 
> HBASE-15130-0.98.v3.patch, HBASE-15130-0.98.v4.patch
>
>
> branch 98 version backport for HBASE-14355



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15816) Provide client with ability to set priority on Operations

2016-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335137#comment-15335137
 ] 

Andrew Purtell commented on HBASE-15816:


bq. Thanks for taking a look and I assume there is some interest and that I 
should go ahead with the other patch for annotation driven priority removal.

Yes

> Provide client with ability to set priority on Operations 
> --
>
> Key: HBASE-15816
> URL: https://issues.apache.org/jira/browse/HBASE-15816
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HBASE-15816-v1.patch, HBASE-15816.patch
>
>
> First round will just be to expose the ability to set priorities for client 
> operations.  For more background: 
> http://mail-archives.apache.org/mod_mbox/hbase-dev/201604.mbox/%3CCA+RK=_BG_o=q8HMptcP2WauAinmEsL+15f3YEJuz=qbpcya...@mail.gmail.com%3E
> Next step would be to remove AnnotationReadingPriorityFunction and have the 
> client send priorities explicitly.  
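
As a rough sketch of what the client-facing API could look like once exposed; a 
{{setPriority}} method on operations is an assumption about the eventual patch, 
not a committed API (HIGH_QOS is an existing HConstants priority level):

{code}
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;

public class PriorityGetSketch {
  // Issue a read the server should treat as high priority; setPriority is
  // hypothetical here, pending the actual patch.
  static Result highPriorityGet(Connection conn, byte[] row) throws Exception {
    try (Table table = conn.getTable(TableName.valueOf("t1"))) {
      Get get = new Get(row);
      get.setPriority(HConstants.HIGH_QOS);
      return table.get(get);
    }
  }
}
{code}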



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15130) Backport 0.98 Scan different TimeRange for each column family

2016-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335136#comment-15335136
 ] 

Andrew Purtell commented on HBASE-15130:


Want to revisit this for 0.98.21, [~churromorales]? 

> Backport 0.98 Scan different TimeRange for each column family 
> --
>
> Key: HBASE-15130
> URL: https://issues.apache.org/jira/browse/HBASE-15130
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver, Scanners
>Affects Versions: 0.98.17
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 0.98.21
>
> Attachments: HBASE-15130-0.98.patch, HBASE-15130-0.98.v1.patch, 
> HBASE-15130-0.98.v1.patch, HBASE-15130-0.98.v2.patch, 
> HBASE-15130-0.98.v3.patch, HBASE-15130-0.98.v4.patch
>
>
> branch 98 version backport for HBASE-14355



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13336) Consistent rules for security meta table protections

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13336:
---
Fix Version/s: (was: 0.98.21)
   (was: 1.3.0)

> Consistent rules for security meta table protections
> 
>
> Key: HBASE-13336
> URL: https://issues.apache.org/jira/browse/HBASE-13336
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Mikhail Antonov
> Fix For: 2.0.0
>
> Attachments: HBASE-13336.patch, HBASE-13336_v2.patch
>
>
> The AccessController and VisibilityController do different things regarding 
> protecting their meta tables. The AC allows schema changes and disable/enable 
> if the user has permission. The VC unconditionally disallows all admin 
> actions. Generally, bad things will happen if these meta tables are damaged, 
> disabled, or dropped. The likely outcome is random frequent (or constant) 
> server side op failures with nasty stack traces. On the other hand some 
> things like column family and table attribute changes can have valid use 
> cases. We should have consistent and sensible rules for protecting security 
> meta tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13336) Consistent rules for security meta table protections

2016-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335135#comment-15335135
 ] 

Andrew Purtell commented on HBASE-13336:


No movement here for a long time. Resolve as Incomplete or Won't Fix, 
[~mantonov]? 

> Consistent rules for security meta table protections
> 
>
> Key: HBASE-13336
> URL: https://issues.apache.org/jira/browse/HBASE-13336
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Mikhail Antonov
> Fix For: 2.0.0
>
> Attachments: HBASE-13336.patch, HBASE-13336_v2.patch
>
>
> The AccessController and VisibilityController do different things regarding 
> protecting their meta tables. The AC allows schema changes and disable/enable 
> if the user has permission. The VC unconditionally disallows all admin 
> actions. Generally, bad things will happen if these meta tables are damaged, 
> disabled, or dropped. The likely outcome is random frequent (or constant) 
> server side op failures with nasty stack traces. On the other hand some 
> things like column family and table attribute changes can have valid use 
> cases. We should have consistent and sensible rules for protecting security 
> meta tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13503) Encryption improvements

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13503:
---
Fix Version/s: 2.0.0

> Encryption improvements
> ---
>
> Key: HBASE-13503
> URL: https://issues.apache.org/jira/browse/HBASE-13503
> Project: HBase
>  Issue Type: Umbrella
>  Components: encryption, security
>Reporter: Andrew Purtell
> Fix For: 2.0.0
>
>
> Umbrella issue for a collection of related encryption improvements:
> - Additional ciphers
> - Better data key derivation
> - Support master key rotation without process restarts (where applicable)
> - Additional KeyProviders with backwards compatible interface evolution



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13504) Alias current AES cipher as AES-CTR

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13504:
---
Assignee: (was: Andrew Purtell)

> Alias current AES cipher as AES-CTR
> ---
>
> Key: HBASE-13504
> URL: https://issues.apache.org/jira/browse/HBASE-13504
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0
>
>
> Alias the current cipher with the name "AES" to the name "AES-CTR".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13506) AES-GCM cipher support where available

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13506:
---
Assignee: (was: Andrew Purtell)

> AES-GCM cipher support where available
> --
>
> Key: HBASE-13506
> URL: https://issues.apache.org/jira/browse/HBASE-13506
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
> Fix For: 2.0.0
>
>
> The initial encryption drop only had AES-CTR support because authenticated 
> modes such as GCM are only available in Java 7 and up, and our trunk at the 
> time was targeted at Java 6. However we can optionally use AES-GCM cipher 
> support where available. For HBase 1.0 and up, Java 7 is now the minimum so 
> use of AES-GCM can go in directly. It's probably possible to add support in 
> 0.98 too using reflection for cipher object initialization. 
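
To illustrate the reflection idea for 0.98, a sketch only: the code compiles 
against Java 6 but picks up Java 7's GCM parameter class at runtime when 
present. The JCE class and method names below are standard; the wrapper itself 
is hypothetical:

{code}
import java.lang.reflect.Constructor;
import java.security.spec.AlgorithmParameterSpec;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class GcmByReflectionSketch {
  static Cipher newGcmEncryptor(byte[] key, byte[] iv) throws Exception {
    // GCMParameterSpec exists only on Java 7+, so load it reflectively to
    // avoid a compile-time dependency.
    Class<?> specClass = Class.forName("javax.crypto.spec.GCMParameterSpec");
    Constructor<?> ctor = specClass.getConstructor(int.class, byte[].class);
    AlgorithmParameterSpec spec =
        (AlgorithmParameterSpec) ctor.newInstance(128, iv); // 128-bit tag
    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), spec);
    return cipher;
  }
}
{code}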



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13505) Deprecate the "AES" cipher type

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13505:
---
Assignee: (was: Andrew Purtell)

> Deprecate the "AES" cipher type
> ---
>
> Key: HBASE-13505
> URL: https://issues.apache.org/jira/browse/HBASE-13505
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0
>
>
> Deprecate the "AES" cipher type. Remove internal references to it and use the 
> "AES-CTR" name instead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13505) Deprecate the "AES" cipher type

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13505:
---
Fix Version/s: (was: 0.98.21)
   (was: 1.3.0)

> Deprecate the "AES" cipher type
> ---
>
> Key: HBASE-13505
> URL: https://issues.apache.org/jira/browse/HBASE-13505
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0
>
>
> Deprecate the "AES" cipher type. Remove internal references to it and use the 
> "AES-CTR" name instead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13511) Derive data keys with HKDF

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13511:
---
Fix Version/s: (was: 0.98.21)
   (was: 1.3.0)

> Derive data keys with HKDF
> --
>
> Key: HBASE-13511
> URL: https://issues.apache.org/jira/browse/HBASE-13511
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0
>
>
> When we are locally managing master key material, when users have supplied 
> their own data key material, derive the actual data keys using HKDF 
> (https://tools.ietf.org/html/rfc5869)
> DK' = HKDF(S, DK, MK)
> where
> S = salt
> DK = user supplied data key
> MK = master key
> DK' = derived data key for the HFile
> User supplied key material may be weak or an attacker may have some partial 
> knowledge of it.
> Where we generate random data keys we can still use HKDF as a way to mix more 
> entropy into the secure random generator. 
> DK' = HKDF(R, MK)
> where
> R = random key material drawn from the system's secure random generator
> MK = master key
> (Salting isn't useful here because salt S and R would be drawn from the same 
> pool, so will not have statistical independence.)
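
A minimal HKDF (RFC 5869) sketch with HMAC-SHA256, for illustration. One 
plausible mapping of the DK' = HKDF(S, DK, MK) notation above is salt = S, 
input key material = DK, and MK supplied as the info/context input; that 
mapping is an assumption, not the proposed implementation:

{code}
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HkdfSketch {
  static byte[] hmacSha256(byte[] key, byte[] data) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(key, "HmacSHA256"));
    return mac.doFinal(data);
  }

  // Extract-then-expand per RFC 5869; outLen must be <= 255 * 32 for SHA-256.
  static byte[] hkdf(byte[] salt, byte[] ikm, byte[] info, int outLen)
      throws Exception {
    byte[] prk = hmacSha256(salt, ikm);          // HKDF-Extract
    byte[] okm = new byte[outLen];
    byte[] t = new byte[0];
    for (int i = 0, pos = 0; pos < outLen; i++) {
      byte[] input = new byte[t.length + info.length + 1];
      System.arraycopy(t, 0, input, 0, t.length);
      System.arraycopy(info, 0, input, t.length, info.length);
      input[input.length - 1] = (byte) (i + 1);  // counter starts at 0x01
      t = hmacSha256(prk, input);                // HKDF-Expand block T(i+1)
      int n = Math.min(t.length, outLen - pos);
      System.arraycopy(t, 0, okm, pos, n);
      pos += n;
    }
    return okm;
  }
}
{code}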



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13504) Alias current AES cipher as AES-CTR

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13504:
---
Fix Version/s: (was: 0.98.21)
   (was: 1.3.0)

> Alias current AES cipher as AES-CTR
> ---
>
> Key: HBASE-13504
> URL: https://issues.apache.org/jira/browse/HBASE-13504
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0
>
>
> Alias the current cipher with the name "AES" to the name "AES-CTR".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13511) Derive data keys with HKDF

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13511:
---
Assignee: (was: Andrew Purtell)

> Derive data keys with HKDF
> --
>
> Key: HBASE-13511
> URL: https://issues.apache.org/jira/browse/HBASE-13511
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0
>
>
> When we are locally managing master key material, when users have supplied 
> their own data key material, derive the actual data keys using HKDF 
> (https://tools.ietf.org/html/rfc5869)
> DK' = HKDF(S, DK, MK)
> where
> S = salt
> DK = user supplied data key
> MK = master key
> DK' = derived data key for the HFile
> User supplied key material may be weak or an attacker may have some partial 
> knowledge of it.
> Where we generate random data keys we can still use HKDF as a way to mix more 
> entropy into the secure random generator. 
> DK' = HKDF(R, MK)
> where
> R = random key material drawn from the system's secure random generator
> MK = master key
> (Salting isn't useful here because salt S and R would be drawn from the same 
> pool, so will not have statistical independence.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16047) TestFastFail is broken again

2016-06-16 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335131#comment-15335131
 ] 

Appy commented on HBASE-16047:
--

Btw [~mantonov], TestFastFail will get a clean chit from the flaky-test infra 
once it passes a sufficient number of times (currently set to 50).

> TestFastFail is broken again
> 
>
> Key: HBASE-16047
> URL: https://issues.apache.org/jira/browse/HBASE-16047
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-16047.v1.patch
>
>
> As found by [~appy] here - http://hbase.x10host.com/flaky-tests/
> Has been failing since 
> https://builds.apache.org/job/HBase-Flaky-Tests/1294/#showFailuresLink,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-14177) Full GC on client may lead to missing scan results

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-14177.

   Resolution: Won't Fix
Fix Version/s: (was: 0.98.21)

No update here for a long time. 

Branch-1.0 is EOL. 

While the problem still exists in 0.98, there's no active work in progress. 
Reopen if that changes. This also serves as motivation to move off 0.98 to a 
more recent version. 

> Full GC on client may lead to missing scan results
> --
>
> Key: HBASE-14177
> URL: https://issues.apache.org/jira/browse/HBASE-14177
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.12, 0.98.13, 1.0.2
>Reporter: James Estes
>Priority: Critical
>  Labels: dataloss
>
> After adding a large row, scanning back that row winds up being empty. After 
> a few attempts it will succeed (all attempts over the same data on an hbase 
> getting no other writes).
> Looking at logs, it seems this happens when there is memory pressure on the 
> client and there are several Full GCs that happen. Then messages that 
> indicate that region locations are being removed from the local client cache:
> 2015-07-31 12:50:24,647 [main] DEBUG 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation  
> - Removed 192.168.1.131:50981 as a location of 
> big_row_1438368609944,,1438368610048.880c849594807bdc7412f4f982337d6c. for 
> tableName=big_row_1438368609944 from cache
> Blaming the GC may sound fanciful, but if the test is run with -Xms4g -Xmx4g 
> then it always passes on the first scan attempt. Maybe the pause is enough to 
> remove something from the cache, or the client is using weak references 
> somewhere?
> More info 
> http://mail-archives.apache.org/mod_mbox/hbase-user/201507.mbox/%3CCAE8tVdnFf%3Dob569%3DfJkpw1ndVWOVTkihYj9eo6qt0FrzihYHgw%40mail.gmail.com%3E
> Test used to reproduce:
> https://github.com/housejester/hbase-debugging#fullgctest
> I tested and had failures in:
> 0.98.12 client/server
> 0.98.13 client 0.98.12 server
> 0.98.13 client/server
> 1.1.0 client 0.98.13 server
> 0.98.13 client and 1.1.0 server
> 0.98.12 client and 1.1.0 server
> I tested without failure in:
> 1.1.0 client/server



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-13096) NPE from SecureWALCellCodec$EncryptedKvEncoder#write when using WAL encryption and Phoenix secondary indexes

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-13096.

   Resolution: Cannot Reproduce
 Assignee: (was: Andrew Purtell)
Fix Version/s: (was: 0.98.21)

Closing as Cannot Reproduce. If someone is seeing this and can post a 
reproduction of the issue we can reopen it. 

> NPE from SecureWALCellCodec$EncryptedKvEncoder#write when using WAL 
> encryption and Phoenix secondary indexes
> 
>
> Key: HBASE-13096
> URL: https://issues.apache.org/jira/browse/HBASE-13096
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.6
>Reporter: Andrew Purtell
>  Labels: phoenix
>
> On user@phoenix Dhavi Rami reported:
> {quote}
> I tried using phoenix in hBase with Transparent Encryption of Data At Rest 
> enabled ( AES encryption) 
> Works fine for a table with primary key column.
> But it doesn't work if I create Secondary index on that tables.I tried to dig 
> deep into the problem and found WAL file encryption throws exception when I 
> have Global Secondary Index created on my mutable table.
> Following is the error I was getting on one of the region server.
> {noformat}
> 2015-02-20 10:44:48,768 ERROR 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog: UNEXPECTED
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:767)
> at org.apache.hadoop.hbase.util.Bytes.toInt(Bytes.java:754)
> at org.apache.hadoop.hbase.KeyValue.getKeyLength(KeyValue.java:1253)
> at 
> org.apache.hadoop.hbase.regionserver.wal.SecureWALCellCodec$EncryptedKvEncoder.write(SecureWALCellCodec.java:194)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.append(ProtobufLogWriter.java:117)
> at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$AsyncWriter.run(FSHLog.java:1137)
> at java.lang.Thread.run(Thread.java:745)
> 2015-02-20 10:44:48,776 INFO org.apache.hadoop.hbase.regionserver.wal.FSHLog: 
> regionserver60020-WAL.AsyncWriter exiting
> {noformat}
> I had to disable WAL encryption, and it started working fine with secondary 
> Index. So Hfile encryption works with secondary index but WAL encryption 
> doesn't work.
> {quote}
> Parking this here for later investigation. For now I'm going to assume this 
> is something in SecureWALCellCodec that needs looking at, but if it turns out 
> to be a Phoenix indexer issue I will move this JIRA there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14927) Backport HBASE-13014 and HBASE-14749 to branch-1 and 0.98

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14927:
---
Fix Version/s: (was: 1.3.0)
   1.4.0

> Backport HBASE-13014 and HBASE-14749 to branch-1 and 0.98
> -
>
> Key: HBASE-14927
> URL: https://issues.apache.org/jira/browse/HBASE-14927
> Project: HBase
>  Issue Type: Improvement
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 1.4.0, 0.98.21
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16048) Consider tagging InternalScanner with LimitedPrivate(HBaseInterfaceAudience.COPROC)

2016-06-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16048:
---
Attachment: 16048.v1.txt

> Consider tagging InternalScanner with 
> LimitedPrivate(HBaseInterfaceAudience.COPROC) 
> 
>
> Key: HBASE-16048
> URL: https://issues.apache.org/jira/browse/HBASE-16048
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Attachments: 16048.v1.txt
>
>
> Some methods (preCompact, preCompactScannerOpen, preFlush, 
> preFlushScannerOpen, etc) of BaseRegionObserver take InternalScanner as input 
> argument as well as the return type.
> BaseRegionObserver is tagged with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC) but 
> InternalScanner is tagged with @InterfaceAudience.Private.
> This JIRA is to discuss tagging InternalScanner with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-14259) Backport Namespace quota support to 98 branch

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-14259.

   Resolution: Won't Fix
 Assignee: (was: Andrew Purtell)
Fix Version/s: (was: 0.98.21)

Closing as Won't Fix. We can reopen if a new sponsor for the change steps 
forward and sufficiently justifies it.

> Backport Namespace quota support to 98 branch 
> --
>
> Key: HBASE-14259
> URL: https://issues.apache.org/jira/browse/HBASE-14259
> Project: HBase
>  Issue Type: Task
>Reporter: Vandana Ayyalasomayajula
> Attachments: HBASE-14259_v1_0.98.patch, HBASE-14259_v2_0.98.patch, 
> HBASE-14259_v3_0.98.patch
>
>
> Namespace quota support (HBASE-8410) has been backported to branch-1 
> (HBASE-13438). This jira would backport the same to 98 branch. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14049) SnapshotHFileCleaner should optionally clean up after failed snapshots

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14049:
---
Fix Version/s: (was: 1.3.0)
   1.4.0

> SnapshotHFileCleaner should optionally clean up after failed snapshots
> --
>
> Key: HBASE-14049
> URL: https://issues.apache.org/jira/browse/HBASE-14049
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.13
>Reporter: Andrew Purtell
> Fix For: 2.0.0, 1.4.0, 0.98.21
>
>
> SnapshotHFileCleaner should optionally clean up after failed snapshots rather 
> than just complain. Add a configuration option that, if set to true 
> (defaulting to false), instructs SnapshotHFileCleaner to recursively remove 
> failed snapshot temporary directories.
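
A sketch of how such an option might be toggled; the property name below is 
hypothetical since the issue doesn't define one, and whatever key the patch 
introduces would replace it:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Hypothetical property name; per the issue, the default would be false,
    // and true would let SnapshotHFileCleaner remove failed snapshot temp dirs.
    conf.setBoolean("hbase.snapshot.cleaner.remove.failed.tmp", true);
    System.out.println(conf.get("hbase.snapshot.cleaner.remove.failed.tmp"));
  }
}
{code}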



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-13667) Backport HBASE-12975 to 1.0 and 0.98 without changing coprocessors hooks

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-13667.

   Resolution: Won't Fix
 Assignee: (was: Rajeshbabu Chintaguntla)
Fix Version/s: (was: 0.98.21)

No update here for a long time. Closing as Won't Fix.

> Backport HBASE-12975 to 1.0 and 0.98 without changing coprocessors hooks
> 
>
> Key: HBASE-13667
> URL: https://issues.apache.org/jira/browse/HBASE-13667
> Project: HBase
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>
> We can backport Split transaction, region merge transaction interfaces to 
> branch 1.0 and 0.98 without changing coprocessor hooks. Then it should be 
> compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16047) TestFastFail is broken again

2016-06-16 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335123#comment-15335123
 ] 

Appy commented on HBASE-16047:
--

Thanks [~mantonov] for the quick fix.
[~stack] I think the new executor can use a set of corresponding tests. For 
example, the expected value for this test would be 0. What do you say?

> TestFastFail is broken again
> 
>
> Key: HBASE-16047
> URL: https://issues.apache.org/jira/browse/HBASE-16047
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-16047.v1.patch
>
>
> As found by [~appy] here - http://hbase.x10host.com/flaky-tests/
> Has been failing since 
> https://builds.apache.org/job/HBase-Flaky-Tests/1294/#showFailuresLink,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-14289) Backport HBASE-13965 'Stochastic Load Balancer JMX Metrics' to 0.98

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-14289.

   Resolution: Won't Fix
 Assignee: (was: Ted Yu)
Fix Version/s: (was: 0.98.21)

No updates here for a long time. Closing as Won't Fix.

> Backport HBASE-13965 'Stochastic Load Balancer JMX Metrics' to 0.98
> ---
>
> Key: HBASE-14289
> URL: https://issues.apache.org/jira/browse/HBASE-14289
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Attachments: 14289-0.98-v2.txt, 14289-0.98-v3.txt, 14289-0.98-v4.txt, 
> 14289-0.98-v5.txt
>
>
> The default HBase load balancer (the Stochastic load balancer) is cost 
> function based. The cost function weights are tunable but no visibility into 
> those cost function results is directly provided.
> This issue backports HBASE-13965 to 0.98 branch to provide visibility via JMX 
> into each cost function of the stochastic load balancer, as well as the 
> overall cost of the balancing plan.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13506) AES-GCM cipher support where available

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13506:
---
Fix Version/s: (was: 0.98.21)
   (was: 1.3.0)

> AES-GCM cipher support where available
> --
>
> Key: HBASE-13506
> URL: https://issues.apache.org/jira/browse/HBASE-13506
> Project: HBase
>  Issue Type: Sub-task
>  Components: encryption, security
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0
>
>
> The initial encryption drop only had AES-CTR support because authenticated 
> modes such as GCM are only available in Java 7 and up, and our trunk at the 
> time was targeted at Java 6. However we can optionally use AES-GCM cipher 
> support where available. For HBase 1.0 and up, Java 7 is now the minimum so 
> use of AES-GCM can go in directly. It's probably possible to add support in 
> 0.98 too using reflection for cipher object initialization. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15979) replication_admin_test.rb fails in 0.98 branch

2016-06-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-15979:
---
Fix Version/s: (was: 0.98.21)

> replication_admin_test.rb fails in 0.98 branch
> --
>
> Key: HBASE-15979
> URL: https://issues.apache.org/jira/browse/HBASE-15979
> Project: HBase
>  Issue Type: Test
>Affects Versions: 0.98.19
>Reporter: Ted Yu
>Priority: Minor
> Attachments: TestShell-output.txt
>
>
> From 
> https://builds.apache.org/job/HBase-0.98-matrix/jdk=latest1.7,label=yahoo-not-h2/352/testReport/junit/org.apache.hadoop.hbase.client/TestShell/testRunShellTests/
>  :
> {code}
>   1) Error:
> test_add_peer:_multiple_zk_cluster_key_-_peer_config(Hbase::ReplicationAdminTest):
> NativeException: org.apache.hadoop.hbase.replication.ReplicationException: 
> Could not remove peer with id=1
> org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java:144:in 
> `removePeer'
> org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java:229:in 
> `removePeer'
> 
> /home/jenkins/jenkins-slave/workspace/HBase-0.98-matrix/jdk/latest1.7/label/yahoo-not-h2/hbase-shell/src/main/ruby/hbase/replication_admin.rb:102:in
>  `remove_peer'
> ./src/test/ruby/hbase/replication_admin_test.rb:140:in 
> `test_add_peer:_multiple_zk_cluster_key_-_peer_config'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
> {code}
> The above led to assertion failures for subsequent tests:
> {code}
> 2) Failure:
> test_add_peer:_multiple_zk_cluster_key_-_peer_config(Hbase::ReplicationAdminTest)
> [./src/test/ruby/hbase/replication_admin_test.rb:41:in `teardown'
>  org/jruby/RubyArray.java:1620:in `each'
>  org/jruby/RubyArray.java:1620:in `each']:
> <0> expected but was
> <1>.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14277) TestRegionServerHostname.testRegionServerHostname may fail at host with a case sensitive name

2016-06-16 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335108#comment-15335108
 ] 

Mikhail Antonov commented on HBASE-14277:
-

Don't see it pushed to branch-1 (and branch-1.3). Going to push it there. 
[~liushaohui]

> TestRegionServerHostname.testRegionServerHostname may fail at host with a 
> case sensitive name
> -
>
> Key: HBASE-14277
> URL: https://issues.apache.org/jira/browse/HBASE-14277
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.0.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.1.3
>
> Attachments: HBASE-14277-v001.diff, HBASE-14277-v002.diff
>
>
> After HBASE-13995, hostname will be converted to lower case in ServerName. It 
> may cause the test: TestRegionServerHostname.testRegionServerHostname failed 
> at host with a case sensitive name.
> Just fix it in test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16052) Improve HBaseFsck Scalability

2016-06-16 Thread Ben Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Lau updated HBASE-16052:

Attachment: HBASE-16052-master.patch

> Improve HBaseFsck Scalability
> -
>
> Key: HBASE-16052
> URL: https://issues.apache.org/jira/browse/HBASE-16052
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck
>Reporter: Ben Lau
> Attachments: HBASE-16052-master.patch
>
>
> There are some problems with HBaseFsck that make it unnecessarily slow 
> especially for large tables or clusters with many regions.  
> This patch tries to fix the biggest bottlenecks and also includes a couple of 
> bug fixes for some of the race conditions caused by gathering and holding 
> state about a live cluster that is no longer true by the time you use that 
> state in Fsck processing.  These race conditions cause Fsck to crash and 
> become unusable on large clusters with lots of region splits/merges.
> Here are some scalability/performance problems in HBaseFsck and the changes 
> the patch makes:
> - Unnecessary I/O and RPCs caused by fetching an array of FileStatuses and 
> then discarding everything but the Paths, then passing the Paths to a 
> PathFilter, and then having the filter look up the (previously discarded) 
> FileStatuses of the paths again.  This is actually worse than double I/O 
> because the first lookup obtains a batch of FileStatuses while all the other 
> lookups are individual RPCs performed sequentially.
> -- Avoid this by adding a FileStatusFilter so that filtering can happen 
> directly on FileStatuses
> -- This performance bug affects more than Fsck; to some extent it also affects 
> things like snapshots, hfile archival, etc.  I didn't have time to look too 
> deep into other things affected and didn't want to increase the scope of this 
> ticket so I focus mostly on Fsck and make only a few improvements to other 
> codepaths.  The changes in this patch though should make it fairly easy to 
> fix other code paths in later jiras if we feel there are some other features 
> strongly impacted by this problem.  
> - OfflineReferenceFileRepair is the most expensive part of Fsck (often 50% of 
> Fsck runtime) and the running time scales with the number of store files, yet 
> the function is completely serial
> -- Make offlineReferenceFileRepair multithreaded
> - LoadHdfsRegionDirs() uses table-level concurrency, which is a big 
> bottleneck if you have 1 large cluster with 1 very large table that has 
> nearly all the regions
> -- Change loadHdfsRegionDirs() to region-level parallelism instead of 
> table-level parallelism for operations.
> The changes benefit all clusters but are especially noticeable for large 
> clusters with a few very large tables.  On our version of 0.98 with the 
> original patch we had a moderately sized production cluster with 2 (user) 
> tables and ~160k regions where HBaseFsck went from taking 18 min to 5 minutes.
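
A minimal sketch of the FileStatusFilter idea described above: one batched 
listStatus call, then in-memory filtering on the returned FileStatuses, instead 
of re-fetching a FileStatus per Path. The interface and helper below are 
illustrative, not the patch's actual API:

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

interface FileStatusFilter {
  boolean accept(FileStatus status);
}

class FilteredListing {
  // A single listStatus RPC returns the FileStatuses in one batch; the filter
  // then runs locally, avoiding one getFileStatus RPC per path.
  static List<FileStatus> listStatusWithFilter(FileSystem fs, Path dir,
      FileStatusFilter filter) throws IOException {
    List<FileStatus> accepted = new ArrayList<FileStatus>();
    for (FileStatus status : fs.listStatus(dir)) {
      if (filter.accept(status)) {
        accepted.add(status);
      }
    }
    return accepted;
  }
}
{code}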



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16052) Improve HBaseFsck Scalability

2016-06-16 Thread Ben Lau (JIRA)
Ben Lau created HBASE-16052:
---

 Summary: Improve HBaseFsck Scalability
 Key: HBASE-16052
 URL: https://issues.apache.org/jira/browse/HBASE-16052
 Project: HBase
  Issue Type: Improvement
  Components: hbck
Reporter: Ben Lau


There are some problems with HBaseFsck that make it unnecessarily slow 
especially for large tables or clusters with many regions.  

This patch tries to fix the biggest bottlenecks and also includes a couple of 
bug fixes for some of the race conditions caused by gathering and holding state 
about a live cluster that is no longer true by the time you use that state in 
Fsck processing.  These race conditions cause Fsck to crash and become unusable 
on large clusters with lots of region splits/merges.

Here are some scalability/performance problems in HBaseFsck and the changes the 
patch makes:
- Unnecessary I/O and RPCs caused by fetching an array of FileStatuses and then 
discarding everything but the Paths, then passing the Paths to a PathFilter, 
and then having the filter look up the (previously discarded) FileStatuses of 
the paths again.  This is actually worse than double I/O because the first 
lookup obtains a batch of FileStatuses while all the other lookups are 
individual RPCs performed sequentially.
-- Avoid this by adding a FileStatusFilter so that filtering can happen 
directly on FileStatuses
-- This performance bug affects more than Fsck; to some extent it also affects 
things like snapshots, hfile archival, etc.  I didn't have time to look too deep into 
other things affected and didn't want to increase the scope of this ticket so I 
focus mostly on Fsck and make only a few improvements to other codepaths.  The 
changes in this patch though should make it fairly easy to fix other code paths 
in later jiras if we feel there are some other features strongly impacted by 
this problem.  
- OfflineReferenceFileRepair is the most expensive part of Fsck (often 50% of 
Fsck runtime) and the running time scales with the number of store files, yet 
the function is completely serial
-- Make offlineReferenceFileRepair multithreaded
- LoadHdfsRegionDirs() uses table-level concurrency, which is a big bottleneck 
if you have 1 large cluster with 1 very large table that has nearly all the 
regions
-- Change loadHdfsRegionDirs() to region-level parallelism instead of 
table-level parallelism for operations.

The changes benefit all clusters but are especially noticeable for large 
clusters with a few very large tables.  On our version of 0.98 with the 
original patch we had a moderately sized production cluster with 2 (user) 
tables and ~160k regions where HBaseFsck went from taking 18 min to 5 minutes.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15347) Update CHANGES.txt for 1.3

2016-06-16 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335097#comment-15335097
 ] 

Mikhail Antonov commented on HBASE-15347:
-

Seems like we have 5 jiras which appear in 1.2.1 but not in 1.3 (besides 
"meta" jiras like those related to the 1.2 release itself):

HBASE-14277
HBASE-14581
HBASE-14730
HBASE-14915
HBASE-15224

> Update CHANGES.txt for 1.3
> --
>
> Key: HBASE-15347
> URL: https://issues.apache.org/jira/browse/HBASE-15347
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 1.3.0
>
>
> Going to post the steps in preparing changes file for 1.3 here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15347) Update CHANGES.txt for 1.3

2016-06-16 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15347:

Description: Going to post the steps in preparing changes file for 1.3 here.

> Update CHANGES.txt for 1.3
> --
>
> Key: HBASE-15347
> URL: https://issues.apache.org/jira/browse/HBASE-15347
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 1.3.0
>
>
> Going to post the steps in preparing changes file for 1.3 here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14743) Add metrics around HeapMemoryManager

2016-06-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335094#comment-15335094
 ] 

stack commented on HBASE-14743:
---

[~reidchan] I tried it. Looks good in visualvm (see attached). Maybe call it 
Memory rather than Heap Memory, unless there is something that might preclude 
our adding offheap metrics down the road? Does it work for you [~reidchan]? I 
tried loading stuff and reading, but the metrics didn't change... 

> Add metrics around HeapMemoryManager
> 
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-14743.001.patch, HBASE-14743.002.patch, 
> HBASE-14743.003.patch, HBASE-14743.004.patch, HBASE-14743.005.patch, 
> HBASE-14743.006.patch, HBASE-14743.007.patch, HBASE-14743.008.patch, 
> HBASE-14743.009.patch, HBASE-14743.009.v2.patch, Screen Shot 2016-06-16 at 
> 5.39.13 PM.png
>
>
> It would be good to know how many invocations there have been.
> How many decided to expand memstore.
> How many decided to expand block cache.
> How many decided to do nothing.
> etc.
> When that's done use those metrics to clean up the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14743) Add metrics around HeapMemoryManager

2016-06-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14743:
--
Attachment: Screen Shot 2016-06-16 at 5.39.13 PM.png

Picture of the nice new bean, but the values don't seem to be getting updated.

> Add metrics around HeapMemoryManager
> 
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-14743.001.patch, HBASE-14743.002.patch, 
> HBASE-14743.003.patch, HBASE-14743.004.patch, HBASE-14743.005.patch, 
> HBASE-14743.006.patch, HBASE-14743.007.patch, HBASE-14743.008.patch, 
> HBASE-14743.009.patch, HBASE-14743.009.v2.patch, Screen Shot 2016-06-16 at 
> 5.39.13 PM.png
>
>
> It would be good to know how many invocations there have been.
> How many decided to expand memstore.
> How many decided to expand block cache.
> How many decided to do nothing.
> etc.
> When that's done use those metrics to clean up the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15429) Add a split policy for busy regions

2016-06-16 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335093#comment-15335093
 ] 

Elliott Clark commented on HBASE-15429:
---

Pushed to master. I couldn't get it to apply to branch-1. If you want this in 
branch-1, can you attach a patch that applies?

> Add a split policy for busy regions
> ---
>
> Key: HBASE-15429
> URL: https://issues.apache.org/jira/browse/HBASE-15429
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15429-V1.patch, HBASE-15429-V2.patch, 
> HBASE-15429.patch
>
>
> Currently, all region split policies are based on size. However, in certain 
> cases, it is a wise choice to make a split decision based on number of 
> requests to the region and split busy regions.
> A crude metric is that if a region blocks writes often and throws 
> RegionTooBusyException, it's probably a good idea to split it.
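
As a rough illustration of the idea (not the committed policy; the threshold, 
the accounting hook, and the base class choice are all assumptions):

{code}
import org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy;

public class BusyRegionSplitPolicySketch
    extends IncreasingToUpperBoundRegionSplitPolicy {
  // Hypothetical counter the region would bump whenever it throws
  // RegionTooBusyException; how it gets fed is outside this sketch.
  private volatile long busyEvents;
  private final long windowStartMs = System.currentTimeMillis();

  @Override
  protected boolean shouldSplit() {
    if (super.shouldSplit()) {
      return true; // keep the existing size-based behavior
    }
    long elapsedMs = System.currentTimeMillis() - windowStartMs;
    // Assumed threshold: split if the region was "too busy" more than once
    // per minute on average.
    return elapsedMs > 0 && busyEvents * 60000L / elapsedMs > 1;
  }
}
{code}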



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16035) Nested AutoCloseables might not all get closed

2016-06-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335074#comment-15335074
 ] 

Hadoop QA commented on HBASE-16035:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 20s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 117m 38s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 167m 6s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811218/HBASE-16035-v1.patch |
| JIRA Issue | HBASE-16035 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf910.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 62a4a2c |
| Default Java | 1.7.0_80 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/jenkins-slave/tools/hudson.model.JDK

[jira] [Commented] (HBASE-15806) An endpoint-based export tool

2016-06-16 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335067#comment-15335067
 ] 

Jerry He commented on HBASE-15806:
--

Compared to the traditional RPC scan, the new approach will keep the server 
handlers occupied longer, and probably more concentrated, affecting concurrent 
user requests.
But I think it is still a good alternative option.

hbase writes to the user directory. Will we have a permission problem?
In a Kerberos cluster?


> An endpoint-based export tool
> -
>
> Key: HBASE-15806
> URL: https://issues.apache.org/jira/browse/HBASE-15806
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: Experiment.png, HBASE-15806-v1.patch, 
> HBASE-15806-v2.patch, HBASE-15806-v3.patch, HBASE-15806.patch
>
>
> The time for exporting a table can be reduced if we use the endpoint technique 
> to export the hdfs files from the region server rather than from the hbase 
> client.
> In my experiments, the elapsed time of the endpoint-based export can be less 
> than half that of the current export tool (with hdfs compression enabled).
> But the shortcoming is we need to alter the table to deploy the endpoint.
> Any comments about this? Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16050) Fully document site building procedure

2016-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335057#comment-15335057
 ] 

Andrew Purtell edited comment on HBASE-16050 at 6/17/16 12:21 AM:
--

Is there a document that walks through how gitpubsub works? I am now sure that 
I have no idea. Checking in isn't sufficient? This is a huge surprise. 
svnpubsub "just works" after you check changes in. 

We rely on a Jenkins job to actually publish the site? By this, I mean, there 
is a hidden step where one must kick off a job? Or an undocumented process of 
patch submission? What happens if Jenkins is unavailable, or the job 
configuration is lost, or someone with hudson-jobadmin accidentally corrupts 
it? 

I think we must have a way to generate and publish the site end to end from the 
developer desktop. This can be handled by placing a script into dev-support/ 
that can do the equivalent of the Jenkins job. 

Apologies that I was somehow totally absent earlier. Now getting involved due 
to contact friction :-/


was (Author: apurtell):
Is there a document that walks through how gitpubsub works? I am now sure that 
I have no idea. Checking in isn't sufficient? This is a huge surprise. 
svnpubsub "just works" after you check changes in. 

We rely on a Jenkins job to actually publish the site? What happens if Jenkins 
is unavailable, or the job configuration is lost, or someone with 
hudson-jobadmin accidentally corrupts it? 

I think we must have a way to generate and publish the site end to end from the 
developer desktop. This can be handled by placing a script into dev-support/ 
that can do the equivalent of the Jenkins job. 

Apologies that I was somehow totally absent earlier. Now getting involved due 
to contact friction :-/

> Fully document site building procedure
> --
>
> Key: HBASE-16050
> URL: https://issues.apache.org/jira/browse/HBASE-16050
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>
> On HBASE-15977 it's apparent some of us don't know how to publish the site 
> now that we've switched to gitpubsub and the documentation on the process is 
> incomplete. 
> I followed the instructions starting here: 
> http://hbase.apache.org/book.html#website_publish 
> Nothing happened.
> What comes after step 3 "Add and commit your changes"? This needs to be 
> documented end to end and something every committer on the project (or, 
> certainly, PMC) can accomplish. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16050) Fully document site building procedure

2016-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335062#comment-15335062
 ] 

Andrew Purtell commented on HBASE-16050:


Oh, and I think I see one mistake I made. I checked out the 'asf-site' branch, 
made the changes, and then PUSHED it. I guess we are supposed to attach a patch 
to a JIRA somewhere?? And a Jenkins job will pick it up? Documentation in the 
online manual makes no mention of this.

> Fully document site building procedure
> --
>
> Key: HBASE-16050
> URL: https://issues.apache.org/jira/browse/HBASE-16050
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>
> On HBASE-15977 it's apparent some of us don't know how to publish the site 
> now that we've switched to gitpubsub and the documentation on the process is 
> incomplete. 
> I followed the instructions starting here: 
> http://hbase.apache.org/book.html#website_publish 
> Nothing happened.
> What comes after step 3 "Add and commit your changes"? This needs to be 
> documented end to end and something every committer on the project (or, 
> certainly, PMC) can accomplish. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16050) Fully document site building procedure

2016-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335057#comment-15335057
 ] 

Andrew Purtell commented on HBASE-16050:


Is there a document that walks through how gitpubsub works? I am now sure that 
I have no idea. Checking in isn't sufficient? This is a huge surprise. 
svnpubsub "just works" after you check changes in. 

We rely on a Jenkins job to actually publish the site? What happens if Jenkins 
is unavailable, or the job configuration is lost, or someone with 
hudson-jobadmin accidentally corrupts it? 

I think we must have a way to generate and publish the site end to end from the 
developer desktop. This can be handled by placing a script into dev-support/ 
that can do the equivalent of the Jenkins job. 

Apologies that I was somehow totally absent earlier. Now getting involved due 
to contact friction :-/

> Fully document site building procedure
> --
>
> Key: HBASE-16050
> URL: https://issues.apache.org/jira/browse/HBASE-16050
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>
> On HBASE-15977 it's apparent some of us don't know how to publish the site 
> now that we've switched to gitpubsub and the documentation on the process is 
> incomplete. 
> I followed the instructions starting here: 
> http://hbase.apache.org/book.html#website_publish 
> Nothing happened.
> What comes after step 3 "Add and commit your changes"? This needs to be 
> documented end to end and something every committer on the project (or, 
> certainly, PMC) can accomplish. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16050) Fully document site building procedure

2016-06-16 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335040#comment-15335040
 ] 

Dima Spivak commented on HBASE-16050:
-

So I think the email that gets sent out to dev@ (subject line: "Successful: 
HBase Generate Website") has all the instructions on how to actually push the 
site live after the Jenkins job that sends that email builds the artifacts. 
Perhaps it would be a good idea to replace the details in the ref guide about 
what the [hbase_generate_website Jenkins 
job|https://builds.apache.org/job/hbase_generate_website/] does with just "Go 
run this job"? At the moment, there's a gap between people in-the-know (i.e. 
[~misty], who built all this awesome automation to make life easy) and people 
who've never pushed the site live before, and that's the friction we can try to 
reduce.

> Fully document site building procedure
> --
>
> Key: HBASE-16050
> URL: https://issues.apache.org/jira/browse/HBASE-16050
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>
> On HBASE-15977 it's apparent some of us don't know how to publish the site 
> now that we've switched to gitpubsub and the documentation on the process is 
> incomplete. 
> I followed the instructions starting here: 
> http://hbase.apache.org/book.html#website_publish 
> Nothing happened.
> What comes after step 3 "Add and commit your changes"? This needs to be 
> documented end to end and something every committer on the project (or, 
> certainly, PMC) can accomplish. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16048) Consider tagging InternalScanner with LimitedPrivate(HBaseInterfaceAudience.COPROC)

2016-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335037#comment-15335037
 ] 

Andrew Purtell edited comment on HBASE-16048 at 6/17/16 12:10 AM:
--

It's a fair question [~jrottinghuis].

Because coprocessors are internal extension points there are going to be 
internal types in method signatures. When they are used as opaque references 
all is well. When we are approached by users wanting compatibility guarantees - 
because they are going to inherit as a base class or implement the interface - 
in the past we sometimes have replaced the internal type with a supportable 
interface, see HBASE-12972 and HBASE-12975 as two examples. 

In this case, InternalScanner not only is found in method signatures but 
coprocessor users, in order to extend scanning, must implement it. It is 
already a stable interface. Would it not be what we would come up with if 
performing another exercise here like HBASE-12972? 

Therefore I am +1 for promoting it to LP


was (Author: apurtell):
It's a fair question [~jrottinghuis].

Because coprocessors are internal extension points there are going to be 
internal types in method signatures. When they are used as opaque references 
all is well. When we are approached by users wanting compatibility guarantees - 
because they are going to inherit as a base class or implement the interface - 
in the past we sometimes have replaced the internal type with a supportable 
interface, see HBASE-12972 and HBASE-12975 as two examples. 

In this case, InternalScanner not only is found in method signatures but 
coprocessor users, in order to extend scanning, must implement it. It is 
already a stable interface. Would it not be what we would come up with if 
performing another exercise here like HBASE-12972? 

> Consider tagging InternalScanner with 
> LimitedPrivate(HBaseInterfaceAudience.COPROC) 
> 
>
> Key: HBASE-16048
> URL: https://issues.apache.org/jira/browse/HBASE-16048
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> Some methods (preCompact, preCompactScannerOpen, preFlush, 
> preFlushScannerOpen, etc.) of BaseRegionObserver take InternalScanner as an 
> input argument and also as the return type.
> BaseRegionObserver is tagged with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC) but 
> InternalScanner is tagged with @InterfaceAudience.Private.
> This JIRA is to discuss tagging InternalScanner with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16048) Consider tagging InternalScanner with LimitedPrivate(HBaseInterfaceAudience.COPROC)

2016-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335037#comment-15335037
 ] 

Andrew Purtell commented on HBASE-16048:


It's a fair question [~jrottinghuis].

Because coprocessors are internal extension points, there are going to be 
internal types in method signatures. When they are used as opaque references, 
all is well. When we are approached by users wanting compatibility guarantees 
(because they are going to inherit it as a base class or implement the 
interface), we have sometimes replaced the internal type with a supportable 
interface; see HBASE-12972 and HBASE-12975 for two examples. 

In this case, InternalScanner is not only found in method signatures; 
coprocessor users, in order to extend scanning, must implement it. It is 
already a stable interface. Would it not be what we would come up with if we 
performed another HBASE-12972-style exercise here? 
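
To make that concrete, here is a minimal sketch, against roughly the branch-1-era coprocessor API, of an observer that extends compaction scanning; the class name and the pass-through logic are illustrative only, not from any patch here. Returning a replacement scanner is exactly what forces the coprocessor author to implement InternalScanner:

{code}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.ScanType;
import org.apache.hadoop.hbase.regionserver.ScannerContext;
import org.apache.hadoop.hbase.regionserver.Store;

public class WrappingCompactionObserver extends BaseRegionObserver {

  @Override
  public InternalScanner preCompact(ObserverContext<RegionCoprocessorEnvironment> c,
      Store store, final InternalScanner scanner, ScanType scanType) throws IOException {
    // The replacement scanner must itself be an InternalScanner, so the
    // coprocessor author ends up implementing the @Private interface.
    return new InternalScanner() {
      @Override
      public boolean next(List<Cell> results) throws IOException {
        // Delegate; a real observer could filter or transform cells here.
        return scanner.next(results);
      }

      @Override
      public boolean next(List<Cell> results, ScannerContext context) throws IOException {
        return scanner.next(results, context);
      }

      @Override
      public void close() throws IOException {
        scanner.close();
      }
    };
  }
}
{code}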

> Consider tagging InternalScanner with 
> LimitedPrivate(HBaseInterfaceAudience.COPROC) 
> 
>
> Key: HBASE-16048
> URL: https://issues.apache.org/jira/browse/HBASE-16048
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> Some methods (preCompact, preCompactScannerOpen, preFlush, 
> preFlushScannerOpen, etc.) of BaseRegionObserver take InternalScanner as an 
> input argument and also as the return type.
> BaseRegionObserver is tagged with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC) but 
> InternalScanner is tagged with @InterfaceAudience.Private.
> This JIRA is to discuss tagging InternalScanner with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15977) Failed variable substitution on home page

2016-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335018#comment-15335018
 ] 

Andrew Purtell commented on HBASE-15977:


Please see 
https://issues.apache.org/jira/browse/HBASE-16050?focusedCommentId=15335016&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15335016

> Failed variable substitution on home page
> -
>
> Key: HBASE-15977
> URL: https://issues.apache.org/jira/browse/HBASE-15977
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Nick Dimiduk
>Assignee: Dima Spivak
> Fix For: 2.0.0
>
> Attachments: HBASE-15977.patch, banner.name.png
>
>
> Check out the top-left of hbase.apache.org; there's an unevaluated variable 
> {{$banner.name}} leaking through.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16050) Fully document site building procedure

2016-06-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335016#comment-15335016
 ] 

Andrew Purtell commented on HBASE-16050:


How does one do gitpubsub? We have a branch 'asf-site' in 
https://git-wip-us.apache.org/repos/asf/hbase . I checked that out, updated it, 
and pushed it up. That didn't change the site. 

> Fully document site building procedure
> --
>
> Key: HBASE-16050
> URL: https://issues.apache.org/jira/browse/HBASE-16050
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>
> On HBASE-15977 it's apparent that some of us don't know how to publish the site 
> now that we've switched to gitpubsub, and the documentation on the process is 
> incomplete. 
> I followed the instructions starting here: 
> http://hbase.apache.org/book.html#website_publish 
> Nothing happened.
> What comes after step 3, "Add and commit your changes"? This needs to be 
> documented end to end, and it needs to be something every committer on the 
> project (or, certainly, every PMC member) can accomplish. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-16 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335014#comment-15335014
 ] 

Gary Helmling commented on HBASE-5291:
--

+1 on addendum 2.

> Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
> -
>
> Key: HBASE-5291
> URL: https://issues.apache.org/jira/browse/HBASE-5291
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver, security
>Reporter: Andrew Purtell
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 5291-addendum.2, HBASE-5291-addendum.patch, 
> HBASE-5291.001.patch, HBASE-5291.002.patch, HBASE-5291.003.patch, 
> HBASE-5291.004.patch, HBASE-5291.005-0.98.patch, 
> HBASE-5291.005-branch-1.patch, HBASE-5291.005.patch
>
>
> Like HADOOP-7119, the same motivations:
> {quote}
> Hadoop RPC already supports Kerberos authentication. 
> {quote}
> As does the HBase secure RPC engine.
> {quote}
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide 
> a unified authentication mechanism and single sign-on for web UI and RPC.
> {quote}
> Also like HADOOP-7119, the same solution:
> A servlet filter is configured in front of all Hadoop web consoles for 
> authentication.
> This filter verifies whether the incoming request is already authenticated by the 
> presence of a signed HTTP cookie. If the cookie is present, its signature is 
> valid, and its value has not expired, the request continues on to the 
> page it invoked. If the cookie is missing, invalid, or expired, the 
> request is delegated to an authenticator handler. The 
> authenticator handler is then responsible for requesting and validating the 
> user's credentials from the user-agent. This may require one or more additional 
> interactions between the authenticator handler and the user-agent (which will 
> be multiple HTTP requests). Once the authenticator handler verifies the 
> credentials and generates an authentication token, a signed cookie is 
> returned to the user-agent for all subsequent invocations.
> The authenticator handler is pluggable and 2 implementations are provided out 
> of the box: pseudo/simple and kerberos.
> 1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
> pseudo/simple authentication. It trusts the value of the user.name query 
> string parameter. The pseudo/simple authenticator handler supports an 
> anonymous mode which accepts any request without requiring the user.name 
> query string parameter to create the token. This is the default behavior, 
> preserving the behavior of the HBase web consoles before this patch.
> 2. The kerberos authenticator handler implements the Kerberos HTTP SPNEGO 
> mechanism. This authenticator handler will generate a token only if a 
> successful Kerberos HTTP SPNEGO interaction is performed between the 
> user-agent and the authenticator. Browsers like Firefox and Internet Explorer 
> support Kerberos HTTP SPNEGO.
> We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
> matter of wiring up the filter to our infoservers in a similar manner. 
> And from 
> https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086
> {quote}
> Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
> authentication for webapps via a filter. You should consider using it. You 
> don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
> artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
> {quote}
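
For a rough illustration of the wiring the description calls for, here is a sketch that fronts an embedded Jetty server with hadoop-auth's AuthenticationFilter. The server setup, port, and principal/keytab values are placeholders for this example, not the actual InfoServer code or anything from the patches here:

{code}
import java.util.EnumSet;

import javax.servlet.DispatcherType;

import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.FilterHolder;
import org.eclipse.jetty.servlet.ServletContextHandler;

public class SpnegoConsoleSketch {
  public static void main(String[] args) throws Exception {
    Server server = new Server(16010); // illustrative port
    ServletContextHandler context = new ServletContextHandler(server, "/");
    // A real web console would register its servlets on the context here.

    // Configure hadoop-auth's filter for Kerberos HTTP SPNEGO.
    FilterHolder auth = new FilterHolder(AuthenticationFilter.class);
    auth.setInitParameter("type", "kerberos");
    auth.setInitParameter("kerberos.principal", "HTTP/host.example.com@EXAMPLE.COM");
    auth.setInitParameter("kerberos.keytab", "/etc/security/keytabs/spnego.keytab");
    auth.setInitParameter("token.validity", "36000"); // seconds the signed cookie stays valid

    // Front every request with the filter, as described above.
    context.addFilter(auth, "/*", EnumSet.of(DispatcherType.REQUEST));

    server.start();
    server.join();
  }
}
{code}

With "type" set to kerberos, the filter performs the SPNEGO negotiation and issues the signed cookie on success, per the flow outlined in the description.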



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15977) Failed variable substitution on home page

2016-06-16 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335010#comment-15335010
 ] 

Misty Stanley-Jones commented on HBASE-15977:
-

We are doing gitpubsub. That's what I did, following the instructions in my 
email. It is no longer broken, at least not for me. You can see whether 
gitpubsub worked by checking the page generation date in the footer.

> Failed variable substitution on home page
> -
>
> Key: HBASE-15977
> URL: https://issues.apache.org/jira/browse/HBASE-15977
> Project: HBase
>  Issue Type: Bug
>  Components: website
>Reporter: Nick Dimiduk
>Assignee: Dima Spivak
> Fix For: 2.0.0
>
> Attachments: HBASE-15977.patch, banner.name.png
>
>
> Check out the top-left of hbase.apache.org; there's an unevaluated variable 
> {{$banner.name}} leaking through.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-5291:
--
Attachment: 5291-addendum.2

> Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
> -
>
> Key: HBASE-5291
> URL: https://issues.apache.org/jira/browse/HBASE-5291
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver, security
>Reporter: Andrew Purtell
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 5291-addendum.2, HBASE-5291-addendum.patch, 
> HBASE-5291.001.patch, HBASE-5291.002.patch, HBASE-5291.003.patch, 
> HBASE-5291.004.patch, HBASE-5291.005-0.98.patch, 
> HBASE-5291.005-branch-1.patch, HBASE-5291.005.patch
>
>
> Like HADOOP-7119, the same motivations:
> {quote}
> Hadoop RPC already supports Kerberos authentication. 
> {quote}
> As does the HBase secure RPC engine.
> {quote}
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide 
> a unified authentication mechanism and single sign-on for web UI and RPC.
> {quote}
> Also like HADOOP-7119, the same solution:
> A servlet filter is configured in front of all Hadoop web consoles for 
> authentication.
> This filter verifies whether the incoming request is already authenticated by the 
> presence of a signed HTTP cookie. If the cookie is present, its signature is 
> valid, and its value has not expired, the request continues on to the 
> page it invoked. If the cookie is missing, invalid, or expired, the 
> request is delegated to an authenticator handler. The 
> authenticator handler is then responsible for requesting and validating the 
> user's credentials from the user-agent. This may require one or more additional 
> interactions between the authenticator handler and the user-agent (which will 
> be multiple HTTP requests). Once the authenticator handler verifies the 
> credentials and generates an authentication token, a signed cookie is 
> returned to the user-agent for all subsequent invocations.
> The authenticator handler is pluggable and 2 implementations are provided out 
> of the box: pseudo/simple and kerberos.
> 1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
> pseudo/simple authentication. It trusts the value of the user.name query 
> string parameter. The pseudo/simple authenticator handler supports an 
> anonymous mode which accepts any request without requiring the user.name 
> query string parameter to create the token. This is the default behavior, 
> preserving the behavior of the HBase web consoles before this patch.
> 2. The kerberos authenticator handler implements the Kerberos HTTP SPNEGO 
> mechanism. This authenticator handler will generate a token only if a 
> successful Kerberos HTTP SPNEGO interaction is performed between the 
> user-agent and the authenticator. Browsers like Firefox and Internet Explorer 
> support Kerberos HTTP SPNEGO.
> We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
> matter of wiring up the filter to our infoservers in a similar manner. 
> And from 
> https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086
> {quote}
> Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
> authentication for webapps via a filter. You should consider using it. You 
> don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
> artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14869) Better request latency and size histograms

2016-06-16 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334995#comment-15334995
 ] 

Mikhail Antonov commented on HBASE-14869:
-

[~busbey] that also went into 1.2; should it be present in fixVersion then?


> Better request latency and size histograms
> --
>
> Key: HBASE-14869
> URL: https://issues.apache.org/jira/browse/HBASE-14869
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Assignee: Vikas Vishwakarma
> Fix For: 2.0.0, 1.3.0, 1.0.3, 1.1.3, 0.98.17
>
> Attachments: 14869-test-0.98.txt, 14869-v1-0.98.txt, 
> 14869-v1-2.0.txt, 14869-v2-0.98.txt, 14869-v2-2.0.txt, 14869-v3-0.98.txt, 
> 14869-v4-0.98.txt, 14869-v5-0.98.txt, 14869-v6-0.98.txt, AppendSizeTime.png, 
> Get.png
>
>
> I just discussed this with a colleague.
> The get, put, etc., histograms that each region server keeps are somewhat 
> useless (depending on what you want to achieve, of course), as they are 
> aggregated and calculated by each region server.
> It would be better to record the number of requests in certain latency 
> bands in addition to what we do now.
> For example, the number of gets that took 0-5ms, 6-10ms, 10-20ms, 20-50ms, 
> 50-100ms, 100-1000ms, > 1000ms, etc. (just as an example; the bands should be 
> configurable).
> That way we can do further calculations after the fact, and answer questions 
> like: How often did we miss our SLA? Percentage of requests that missed an 
> SLA, etc.
> Comments?
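
As a simplified, hypothetical sketch of the banded counters being proposed (band boundaries are hard-coded here for brevity, whereas the proposal calls for them to be configurable):

{code}
import java.util.concurrent.atomic.AtomicLongArray;

/** Counts requests per latency band so SLA misses can be computed after the fact. */
public class LatencyBandHistogram {
  // Upper bounds in milliseconds; an extra (implicit) band catches everything above.
  private final long[] upperBoundsMs = {5, 10, 20, 50, 100, 1000};
  private final AtomicLongArray counts = new AtomicLongArray(upperBoundsMs.length + 1);

  /** Increment the counter for the band this latency falls into. */
  public void record(long latencyMs) {
    for (int i = 0; i < upperBoundsMs.length; i++) {
      if (latencyMs <= upperBoundsMs[i]) {
        counts.incrementAndGet(i);
        return;
      }
    }
    counts.incrementAndGet(upperBoundsMs.length); // > 1000 ms
  }

  /** Fraction of recorded requests slower than the given band's upper bound. */
  public double fractionAbove(int bandIndex) {
    long above = 0, total = 0;
    for (int i = 0; i < counts.length(); i++) {
      long c = counts.get(i);
      total += c;
      if (i > bandIndex) {
        above += c;
      }
    }
    return total == 0 ? 0.0 : (double) above / total;
  }
}
{code}

A monitoring job could then read the per-band counts to answer the SLA questions above, e.g. fractionAbove(3) for the share of requests slower than 50 ms.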



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16048) Consider tagging InternalScanner with LimitedPrivate(HBaseInterfaceAudience.COPROC)

2016-06-16 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334993#comment-15334993
 ] 

Joep Rottinghuis commented on HBASE-16048:
--

Isn't the question here: can anybody reasonably use BaseRegionObserver when 
implementing a coprocessor but not use InternalScanner?
If the answer is no, then by inference InternalScanner must be tagged with 
LimitedPrivate(HBaseInterfaceAudience.COPROC), right?

> Consider tagging InternalScanner with 
> LimitedPrivate(HBaseInterfaceAudience.COPROC) 
> 
>
> Key: HBASE-16048
> URL: https://issues.apache.org/jira/browse/HBASE-16048
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> Some methods (preCompact, preCompactScannerOpen, preFlush, 
> preFlushScannerOpen, etc.) of BaseRegionObserver take InternalScanner as an 
> input argument and also as the return type.
> BaseRegionObserver is tagged with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC) but 
> InternalScanner is tagged with @InterfaceAudience.Private.
> This JIRA is to discuss tagging InternalScanner with 
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5291) Add Kerberos HTTP SPNEGO authentication support to HBase web consoles

2016-06-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15334960#comment-15334960
 ] 

Ted Yu commented on HBASE-5291:
---

Addendum was committed to both branches.

> Add Kerberos HTTP SPNEGO authentication support to HBase web consoles
> -
>
> Key: HBASE-5291
> URL: https://issues.apache.org/jira/browse/HBASE-5291
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver, security
>Reporter: Andrew Purtell
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-5291-addendum.patch, HBASE-5291.001.patch, 
> HBASE-5291.002.patch, HBASE-5291.003.patch, HBASE-5291.004.patch, 
> HBASE-5291.005-0.98.patch, HBASE-5291.005-branch-1.patch, HBASE-5291.005.patch
>
>
> Like HADOOP-7119, the same motivations:
> {quote}
> Hadoop RPC already supports Kerberos authentication. 
> {quote}
> As does the HBase secure RPC engine.
> {quote}
> Kerberos enables single sign-on.
> Popular browsers (Firefox and Internet Explorer) have support for Kerberos 
> HTTP SPNEGO.
> Adding support for Kerberos HTTP SPNEGO to [HBase] web consoles would provide 
> a unified authentication mechanism and single sign-on for web UI and RPC.
> {quote}
> Also like HADOOP-7119, the same solution:
> A servlet filter is configured in front of all Hadoop web consoles for 
> authentication.
> This filter verifies whether the incoming request is already authenticated by the 
> presence of a signed HTTP cookie. If the cookie is present, its signature is 
> valid, and its value has not expired, the request continues on to the 
> page it invoked. If the cookie is missing, invalid, or expired, the 
> request is delegated to an authenticator handler. The 
> authenticator handler is then responsible for requesting and validating the 
> user's credentials from the user-agent. This may require one or more additional 
> interactions between the authenticator handler and the user-agent (which will 
> be multiple HTTP requests). Once the authenticator handler verifies the 
> credentials and generates an authentication token, a signed cookie is 
> returned to the user-agent for all subsequent invocations.
> The authenticator handler is pluggable and 2 implementations are provided out 
> of the box: pseudo/simple and kerberos.
> 1. The pseudo/simple authenticator handler is equivalent to the Hadoop 
> pseudo/simple authentication. It trusts the value of the user.name query 
> string parameter. The pseudo/simple authenticator handler supports an 
> anonymous mode which accepts any request without requiring the user.name 
> query string parameter to create the token. This is the default behavior, 
> preserving the behavior of the HBase web consoles before this patch.
> 2. The kerberos authenticator handler implements the Kerberos HTTP SPNEGO 
> mechanism. This authenticator handler will generate a token only if a 
> successful Kerberos HTTP SPNEGO interaction is performed between the 
> user-agent and the authenticator. Browsers like Firefox and Internet Explorer 
> support Kerberos HTTP SPNEGO.
> We can build on the support added to Hadoop via HADOOP-7119. Should just be a 
> matter of wiring up the filter to our infoservers in a similar manner. 
> And from 
> https://issues.apache.org/jira/browse/HBASE-5050?focusedCommentId=13171086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13171086
> {quote}
> Hadoop 0.23 onwards has a hadoop-auth artifact that provides SPNEGO/Kerberos 
> authentication for webapps via a filter. You should consider using it. You 
> don't have to move Hbase to 0.23 for that, just consume the hadoop-auth 
> artifact, which has no dependencies on the rest of Hadoop 0.23 artifacts.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

