[jira] [Created] (HBASE-24345) [ACL] renameRSGroup should require Admin level permission

2020-05-07 Thread Reid Chan (Jira)
Reid Chan created HBASE-24345:
-

 Summary: [ACL] renameRSGroup should require Admin level permission
 Key: HBASE-24345
 URL: https://issues.apache.org/jira/browse/HBASE-24345
 Project: HBase
  Issue Type: Improvement
  Components: acl, rsgroup
Reporter: Reid Chan
Assignee: Reid Chan






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-24344) Release 2.2.5

2020-05-07 Thread Guanghao Zhang (Jira)
Guanghao Zhang created HBASE-24344:
--

 Summary: Release 2.2.5
 Key: HBASE-24344
 URL: https://issues.apache.org/jira/browse/HBASE-24344
 Project: HBase
  Issue Type: Umbrella
Reporter: Guanghao Zhang








[jira] [Resolved] (HBASE-24310) Use Slf4jRequestLog for hbase-http

2020-05-07 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-24310.
---
Fix Version/s: 2.3.0
   3.0.0-alpha-1
 Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)
   Resolution: Fixed

Pushed to branch-2.3+.

Thanks [~stack] for reviewing.

> Use Slf4jRequestLog for hbase-http
> --
>
> Key: HBASE-24310
> URL: https://issues.apache.org/jira/browse/HBASE-24310
> Project: HBase
>  Issue Type: Sub-task
>  Components: logging
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> To remove the direct dependency on log4j in hbase-http server.





[jira] [Created] (HBASE-24343) Document how to config request log for master and rs info server

2020-05-07 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-24343:
-

 Summary: Document how to config request log for master and rs info 
server
 Key: HBASE-24343
 URL: https://issues.apache.org/jira/browse/HBASE-24343
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Duo Zhang








Re: Recent experience with Chaos Monkey?

2020-05-07 Thread Zach York
I should note that I was using HBase 2.2.3 to test.

On Thu, May 7, 2020 at 5:26 PM Zach York 
wrote:

> I recently ran ITBLL with Chaos monkey[1] against a real HBase
> installation (EMR). I initially tried to run it locally, but couldn't get
> it working and eventually gave up.
>
> > So I'm curious if this matches others' experience running the monkey. For
> example, do you have an environment more resilient than mine, one where an
> external actor is restarting downed processes without the monkey action's
> involvement?
>
> It actually performs even worse in this case, in my experience, since Chaos
> Monkey can consider the failure mechanism itself to have failed (and
> eventually times out) because the process is too quick to recover, or the
> recovery fails because the process is already running. The only way I was
> able to get it to run was to disable the process that automatically
> restarts killed processes in my system.
>
> One other thing I hit: the validation for a suspended process was
> incorrect, so if Chaos Monkey tried to suspend a process, the run would
> fail. I'll put up a JIRA for that.
>
> This brings up a discussion on whether the ITBLL (or whatever process)
> should even continue if either a killing or a recovering action failed. I
> would argue that invalidates the entire test, but it might not be obvious
> it failed unless you were watching the logs as it went.
>
> Thanks,
> Zach
>
>
> [1] sudo -u hbase hbase
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList -m serverKilling
> loop 4 2 100 ${RANDOM} 10
>
> On Thu, May 7, 2020 at 5:05 PM Nick Dimiduk  wrote:
>
>> Hello,
>>
>> Does anyone have recent experience running Chaos Monkey? Are you running
>> against an external cluster, or one of the other modes? What monkey
>> factory
>> are you using? Any property overrides? A non-default ClusterManager?
>>
>> I'm trying to run ITBLL with chaos against branch-2.3 and I'm not having
>> much luck. My environment is an "external" cluster, 4 racks of 4 hosts
>> each, the relatively simple "serverKilling" factory with
>> `rolling.batch.suspend.rs.ratio = 0.0`. So, randomly kill various hosts
>> on various schedules, plus some balancer play mixed in; no process
>> suspension.
>>
>> Running for any length of time (~30 minutes), the chaos monkey eventually
>> terminates somewhere between a majority and all of the hosts in the
>> cluster. My logs are peppered with warnings such as the one below. There
>> are other variants. As far as I can tell, actions are intended to cause
>> some harm and then restore state after themselves. In practice, the harm
>> is successful but restoration rarely succeeds. Mostly these actions are
>> "safeguarded" by this 60-sec timeout. The result is a methodical
>> termination of the cluster.
>>
>> So I'm curious if this matches others' experience running the monkey. For
>> example, do you have an environment more resilient than mine, one where an
>> external actor is restarting downed processes without the monkey action's
>> involvement? Is the monkey designed to run only in such an environment?
>> These timeouts are configurable; are you cranking them way up?
>>
>> Any input you have would be greatly appreciated. This is my last major
>> action item blocking initial 2.3.0 release candidates.
>>
>> Thanks,
>> Nick
>>
>> 20/05/05 21:19:29 WARN policies.Policy: Exception occurred during
>> performing action: java.io.IOException: did timeout 60000ms waiting for
>> region server to start: host-a.example.com
>> at
>>
>> org.apache.hadoop.hbase.HBaseCluster.waitForRegionServerToStart(HBaseCluster.java:163)
>> at
>> org.apache.hadoop.hbase.chaos.actions.Action.startRs(Action.java:228)
>> at
>>
>> org.apache.hadoop.hbase.chaos.actions.RestartActionBaseAction.gracefulRestartRs(RestartActionBaseAction.java:70)
>> at
>>
>> org.apache.hadoop.hbase.chaos.actions.GracefulRollingRestartRsAction.perform(GracefulRollingRestartRsAction.java:61)
>> at
>>
>> org.apache.hadoop.hbase.chaos.policies.DoActionsOncePolicy.runOneIteration(DoActionsOncePolicy.java:50)
>> at
>>
>> org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41)
>> at
>>
>> org.apache.hadoop.hbase.chaos.policies.CompositeSequentialPolicy.run(CompositeSequentialPolicy.java:42)
>> at java.base/java.lang.Thread.run(Thread.java:834)
>>
>


Re: Recent experience with Chaos Monkey?

2020-05-07 Thread Zach York
I recently ran ITBLL with Chaos monkey[1] against a real HBase installation
(EMR). I initially tried to run it locally, but couldn't get it working and
eventually gave up.

> So I'm curious if this matches others' experience running the monkey. For
example, do you have an environment more resilient than mine, one where an
external actor is restarting downed processes without the monkey action's
involvement?

It actually performs even worse in this case, in my experience, since Chaos
Monkey can consider the failure mechanism itself to have failed (and
eventually times out) because the process is too quick to recover, or the
recovery fails because the process is already running. The only way I was
able to get it to run was to disable the process that automatically restarts
killed processes in my system.

One other thing I hit: the validation for a suspended process was incorrect,
so if Chaos Monkey tried to suspend a process, the run would fail. I'll put
up a JIRA for that.

This brings up a discussion on whether the ITBLL (or whatever process)
should even continue if either a killing or a recovering action failed. I
would argue that invalidates the entire test, but it might not be obvious it
failed unless you were watching the logs as it went.

Thanks,
Zach


[1] sudo -u hbase hbase
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList -m serverKilling
loop 4 2 100 ${RANDOM} 10

On Thu, May 7, 2020 at 5:05 PM Nick Dimiduk  wrote:

> Hello,
>
> Does anyone have recent experience running Chaos Monkey? Are you running
> against an external cluster, or one of the other modes? What monkey factory
> are you using? Any property overrides? A non-default ClusterManager?
>
> I'm trying to run ITBLL with chaos against branch-2.3 and I'm not having
> much luck. My environment is an "external" cluster, 4 racks of 4 hosts
> each, the relatively simple "serverKilling" factory with
> `rolling.batch.suspend.rs.ratio = 0.0`. So, randomly kill various hosts on
> various schedules, plus some balancer play mixed in; no process suspension.
>
> Running for any length of time (~30 minutes), the chaos monkey eventually
> terminates somewhere between a majority and all of the hosts in the
> cluster. My logs are peppered with warnings such as the one below. There
> are other variants. As far as I can tell, actions are intended to cause
> some harm and then restore state after themselves. In practice, the harm is
> successful but restoration rarely succeeds. Mostly these actions are
> "safeguarded" by this 60-sec timeout. The result is a methodical
> termination of the cluster.
>
> So I'm curious if this matches others' experience running the monkey. For
> example, do you have an environment more resilient than mine, one where an
> external actor is restarting downed processes without the monkey action's
> involvement? Is the monkey designed to run only in such an environment?
> These timeouts are configurable; are you cranking them way up?
>
> Any input you have would be greatly appreciated. This is my last major
> action item blocking initial 2.3.0 release candidates.
>
> Thanks,
> Nick
>
> 20/05/05 21:19:29 WARN policies.Policy: Exception occurred during
> performing action: java.io.IOException: did timeout 60000ms waiting for
> region server to start: host-a.example.com
> at
>
> org.apache.hadoop.hbase.HBaseCluster.waitForRegionServerToStart(HBaseCluster.java:163)
> at
> org.apache.hadoop.hbase.chaos.actions.Action.startRs(Action.java:228)
> at
>
> org.apache.hadoop.hbase.chaos.actions.RestartActionBaseAction.gracefulRestartRs(RestartActionBaseAction.java:70)
> at
>
> org.apache.hadoop.hbase.chaos.actions.GracefulRollingRestartRsAction.perform(GracefulRollingRestartRsAction.java:61)
> at
>
> org.apache.hadoop.hbase.chaos.policies.DoActionsOncePolicy.runOneIteration(DoActionsOncePolicy.java:50)
> at
>
> org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41)
> at
>
> org.apache.hadoop.hbase.chaos.policies.CompositeSequentialPolicy.run(CompositeSequentialPolicy.java:42)
> at java.base/java.lang.Thread.run(Thread.java:834)
>


Recent experience with Chaos Monkey?

2020-05-07 Thread Nick Dimiduk
Hello,

Does anyone have recent experience running Chaos Monkey? Are you running
against an external cluster, or one of the other modes? What monkey factory
are you using? Any property overrides? A non-default ClusterManager?

I'm trying to run ITBLL with chaos against branch-2.3 and I'm not having
much luck. My environment is an "external" cluster, 4 racks of 4 hosts
each, the relatively simple "serverKilling" factory with
`rolling.batch.suspend.rs.ratio = 0.0`. So, randomly kill various hosts on
various schedules, plus some balancer play mixed in; no process suspension.

Running for any length of time (~30 minutes), the chaos monkey eventually
terminates somewhere between a majority and all of the hosts in the cluster.
My logs are peppered with warnings such as the one below. There are other
variants. As far as I can tell, actions are intended to cause some harm and
then restore state after themselves. In practice, the harm is successful but
restoration rarely succeeds. Mostly these actions are "safeguarded" by this
60-sec timeout. The result is a methodical termination of the cluster.

So I'm curious if this matches others' experience running the monkey. For
example, do you have an environment more resilient than mine, one where an
external actor is restarting downed processes without the monkey action's
involvement? Is the monkey designed to run only in such an environment?
These timeouts are configurable; are you cranking them way up?
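As a sketch of how such overrides are commonly supplied, assuming the integration test driver's `-monkeyProps` option (only `rolling.batch.suspend.rs.ratio` is taken from this thread; everything else here is a placeholder, and lookup semantics for the file vary by version):

```properties
# chaos.properties -- overrides for a chaos monkey run like the one above.
# Only rolling.batch.suspend.rs.ratio appears in this thread; consult
# MonkeyConstants in hbase-it for the actual list of supported properties.
rolling.batch.suspend.rs.ratio = 0.0
```

The file would then ride along on the invocation, e.g. `hbase org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList -m serverKilling -monkeyProps chaos.properties loop 4 2 100 ${RANDOM} 10`.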

Any input you have would be greatly appreciated. This is my last major
action item blocking initial 2.3.0 release candidates.

Thanks,
Nick

20/05/05 21:19:29 WARN policies.Policy: Exception occurred during
performing action: java.io.IOException: did timeout 60000ms waiting for
region server to start: host-a.example.com
at
org.apache.hadoop.hbase.HBaseCluster.waitForRegionServerToStart(HBaseCluster.java:163)
at
org.apache.hadoop.hbase.chaos.actions.Action.startRs(Action.java:228)
at
org.apache.hadoop.hbase.chaos.actions.RestartActionBaseAction.gracefulRestartRs(RestartActionBaseAction.java:70)
at
org.apache.hadoop.hbase.chaos.actions.GracefulRollingRestartRsAction.perform(GracefulRollingRestartRsAction.java:61)
at
org.apache.hadoop.hbase.chaos.policies.DoActionsOncePolicy.runOneIteration(DoActionsOncePolicy.java:50)
at
org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41)
at
org.apache.hadoop.hbase.chaos.policies.CompositeSequentialPolicy.run(CompositeSequentialPolicy.java:42)
at java.base/java.lang.Thread.run(Thread.java:834)


[jira] [Resolved] (HBASE-24250) CatalogJanitor resubmits GCMultipleMergedRegionsProcedure for the same region

2020-05-07 Thread Huaxiang Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huaxiang Sun resolved HBASE-24250.
--
Fix Version/s: 2.4.0
   2.3.0
   3.0.0-alpha-1
   Resolution: Fixed

> CatalogJanitor resubmits GCMultipleMergedRegionsProcedure for the same region
> -
>
> Key: HBASE-24250
> URL: https://issues.apache.org/jira/browse/HBASE-24250
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.2.4
> Environment: hdfs 3.1.3 with erasure coding
> hbase 2.2.4
>Reporter: Andrey Elenskiy
>Assignee: niuyulin
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0
>
>
> If a lot of regions were merged (due to a change of region sizes, for 
> example), there can be a long backlog of procedures to clean up the merged 
> regions. If going through this backlog is slower than the CatalogJanitor's 
> scan interval, it will end up resubmitting GCMultipleMergedRegionsProcedure 
> for the same regions over and over again.
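A minimal sketch of the kind of dedup guard such a fix implies (hypothetical names, not the actual CatalogJanitor or GCMultipleMergedRegionsProcedure code):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: remember which regions already have an in-flight GC
// procedure so a scan that outpaces the backlog does not resubmit them.
public class GcSubmissionGuard {
    private final Set<String> inFlight = new HashSet<>();

    // Returns true only on the first submission for a region; later scans
    // find it in the in-flight set and skip it.
    public synchronized boolean trySubmit(String regionName) {
        return inFlight.add(regionName);
    }

    // Called when the GC procedure completes, permitting future submissions.
    public synchronized void complete(String regionName) {
        inFlight.remove(regionName);
    }

    public static void main(String[] args) {
        GcSubmissionGuard guard = new GcSubmissionGuard();
        System.out.println(guard.trySubmit("region-a")); // first scan submits
        System.out.println(guard.trySubmit("region-a")); // second scan skips
    }
}
```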





[jira] [Resolved] (HBASE-24338) [Flakey Tests] NPE in TestRaceBetweenSCPAndDTP

2020-05-07 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-24338.
---
Fix Version/s: 2.3.0
   3.0.0-alpha-1
 Hadoop Flags: Reviewed
   Resolution: Fixed

Pushed to 2.3+. Thanks for the reviews, [~binlijin] and [~zhangduo].

> [Flakey Tests] NPE in TestRaceBetweenSCPAndDTP
> --
>
> Key: HBASE-24338
> URL: https://issues.apache.org/jira/browse/HBASE-24338
> Project: HBase
>  Issue Type: Bug
>  Components: flakies, test
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> Seen in local runs





[jira] [Resolved] (HBASE-24295) [Chaos Monkey] abstract logging through the class hierarchy

2020-05-07 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-24295.
--
Resolution: Fixed

> [Chaos Monkey] abstract logging through the class hierarchy
> ---
>
> Key: HBASE-24295
> URL: https://issues.apache.org/jira/browse/HBASE-24295
> Project: HBase
>  Issue Type: Task
>  Components: integration tests
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> Running chaos monkey and watching the logs, it's very difficult to tell what 
> actions are actually running. There's lots of shared methods through the 
> class hierarchy that extends from {{abstract class Action}}, and each class 
> comes with its own {{Logger}}. As a result, the logs have useless stuff like
> {noformat}
> INFO actions.Action: Started regionserver...
> {noformat}
> Add {{protected abstract Logger getLogger()}} to the class's internal 
> interface, and have the concrete implementations provide their logger.
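The proposal can be sketched with plain JDK logging (simplified stand-ins, not the actual org.apache.hadoop.hbase.chaos classes):

```java
import java.util.logging.Logger;

// Simplified stand-ins for the chaos Action hierarchy described above.
abstract class Action {
    // Concrete actions supply their own logger, so shared helpers in the
    // base class log under the subclass's name instead of "actions.Action".
    protected abstract Logger getLogger();

    // A shared helper method: its log line is now attributed to the
    // concrete action class that actually ran.
    void startRs(String hostname) {
        getLogger().info("Started regionserver on " + hostname);
    }
}

class GracefulRollingRestartRsAction extends Action {
    private static final Logger LOG =
        Logger.getLogger(GracefulRollingRestartRsAction.class.getName());

    @Override
    protected Logger getLogger() {
        return LOG;
    }
}

public class LoggerSketch {
    public static void main(String[] args) {
        // The log record carries the concrete action's logger name.
        new GracefulRollingRestartRsAction().startRs("host-a.example.com");
    }
}
```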





[jira] [Resolved] (HBASE-24342) [Flakey Tests] Disable TestClusterPortAssignment.testClusterPortAssignment as it can't pass 100% of the time

2020-05-07 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-24342.
---
Fix Version/s: 2.3.0
   3.0.0-alpha-1
   Resolution: Fixed

Pushed to branch-2.3+.

branch-2 failed last night because of this test failure 
https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2/ 
#2647

> [Flakey Tests] Disable TestClusterPortAssignment.testClusterPortAssignment as 
> it can't pass 100% of the time
> 
>
> Key: HBASE-24342
> URL: https://issues.apache.org/jira/browse/HBASE-24342
> Project: HBase
>  Issue Type: Bug
>  Components: flakies, test
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> This is a BindException special. We get randomFreePort and then put up the 
> processes.
> {code}
> 2020-05-07 00:30:15,844 INFO  [Time-limited test] http.HttpServer(1080): 
> HttpServer.start() threw a non Bind IOException
> java.net.BindException: Port in use: 0.0.0.0:59568
>   at 
> org.apache.hadoop.hbase.http.HttpServer.openListeners(HttpServer.java:1146)
>   at org.apache.hadoop.hbase.http.HttpServer.start(HttpServer.java:1077)
>   at org.apache.hadoop.hbase.http.InfoServer.start(InfoServer.java:148)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:2133)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:670)
>   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:511)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:132)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:239)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:181)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:245)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:115)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1178)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1142)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1106)
>   at 
> org.apache.hadoop.hbase.TestClusterPortAssignment.testClusterPortAssignment(TestClusterPortAssignment.java:57)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at 
> org.apache.hadoop.hbase.SystemExitRule$1.evaluate(SystemExitRule.java:38)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
>   at 

[jira] [Created] (HBASE-24342) [Flakey Tests] TestClusterPortAssignment.testClusterPortAssignment

2020-05-07 Thread Michael Stack (Jira)
Michael Stack created HBASE-24342:
-

 Summary: [Flakey Tests] 
TestClusterPortAssignment.testClusterPortAssignment
 Key: HBASE-24342
 URL: https://issues.apache.org/jira/browse/HBASE-24342
 Project: HBase
  Issue Type: Bug
  Components: flakies, test
Reporter: Michael Stack
Assignee: Michael Stack


This is a BindException special. We get randomFreePort and then put up the 
processes.

{code}
2020-05-07 00:30:15,844 INFO  [Time-limited test] http.HttpServer(1080): 
HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: 0.0.0.0:59568
at 
org.apache.hadoop.hbase.http.HttpServer.openListeners(HttpServer.java:1146)
at org.apache.hadoop.hbase.http.HttpServer.start(HttpServer.java:1077)
at org.apache.hadoop.hbase.http.InfoServer.start(InfoServer.java:148)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:2133)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:670)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:511)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:132)
at 
org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:239)
at 
org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:181)
at 
org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:245)
at 
org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:115)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1178)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1142)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1106)
at 
org.apache.hadoop.hbase.TestClusterPortAssignment.testClusterPortAssignment(TestClusterPortAssignment.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at 
org.apache.hadoop.hbase.SystemExitRule$1.evaluate(SystemExitRule.java:38)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:351)
at 
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:319)
at 
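The underlying race can be illustrated outside HBase: probing for a free port and binding it in a later step is inherently racy, whereas binding immediately holds the port (plain JDK sketch, not the test's actual code):

```java
import java.io.IOException;
import java.net.ServerSocket;

// Illustrative sketch of why "find a random free port, then bind it later"
// races: another process can grab the port between the probe and the bind,
// yielding a BindException like the one above. The only reliable way to hold
// a free port is to bind it immediately and keep the socket open.
public class PortReservation {
    // Lets the OS pick an ephemeral port and returns the bound socket; as
    // long as the caller holds the socket, nobody else can take the port.
    static ServerSocket reservePort() throws IOException {
        return new ServerSocket(0); // port 0 = OS-assigned free port
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket s = reservePort()) {
            System.out.println("reserved port " + s.getLocalPort());
        }
    }
}
```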

[jira] [Resolved] (HBASE-24335) Support deleteall with ts but without column in shell mode

2020-05-07 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil resolved HBASE-24335.
--
Resolution: Fixed

Thanks for the contribution, [~filtertip]! Pushed it to the master, branch-2, 
and branch-2.3 branches.

> Support deleteall with ts but without column in shell mode
> --
>
> Key: HBASE-24335
> URL: https://issues.apache.org/jira/browse/HBASE-24335
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 3.0.0-alpha-1
>Reporter: Zheng Wang
>Assignee: Zheng Wang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0
>
>
> The position after the rowkey is the column, so currently we cannot specify 
> only a ts. My proposal is to use an empty string to represent that no column 
> is specified.
>  Usage: 
>  deleteall 'test','r1','',158876590
>  deleteall 'test', {ROWPREFIXFILTER => 'prefix'}, '', 158876590





[jira] [Reopened] (HBASE-24295) [Chaos Monkey] abstract logging through the class hierarchy

2020-05-07 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk reopened HBASE-24295:
--

Reopening for addendum.

> [Chaos Monkey] abstract logging through the class hierarchy
> ---
>
> Key: HBASE-24295
> URL: https://issues.apache.org/jira/browse/HBASE-24295
> Project: HBase
>  Issue Type: Task
>  Components: integration tests
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> Running chaos monkey and watching the logs, it's very difficult to tell what 
> actions are actually running. There's lots of shared methods through the 
> class hierarchy that extends from {{abstract class Action}}, and each class 
> comes with its own {{Logger}}. As a result, the logs have useless stuff like
> {noformat}
> INFO actions.Action: Started regionserver...
> {noformat}
> Add {{protected abstract Logger getLogger()}} to the class's internal 
> interface, and have the concrete implementations provide their logger.





[jira] [Created] (HBASE-24341) The region should be removed from ConfigurationManager as a ConfigurationObserver when the region is closed

2020-05-07 Thread Junhong Xu (Jira)
Junhong Xu created HBASE-24341:
--

 Summary: The region should be removed from ConfigurationManager as 
a ConfigurationObserver when the region is closed
 Key: HBASE-24341
 URL: https://issues.apache.org/jira/browse/HBASE-24341
 Project: HBase
  Issue Type: Improvement
Reporter: Junhong Xu
Assignee: Junhong Xu


When the region is opened, we register it with the ConfigurationManager as a 
ConfigurationObserver. However, when the region is closed, we do not 
deregister it from the ConfigurationManager correspondingly. This is not a 
bug for now, because we currently do nothing at the region level (the work 
happens at the store level) when the configuration is updated, whether the 
region is open or not. But it is bug-prone, and we should remove the region 
from the ConfigurationManager when it is closed.
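The register/deregister symmetry being proposed, as a minimal sketch (hypothetical mirror classes, not the actual HBase API):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical mirror of the symmetry the ticket asks for: closing a region
// must deregister the observer that opening it registered.
interface ConfigurationObserver {
    void onConfigurationChange();
}

class ConfigurationManager {
    private final Set<ConfigurationObserver> observers = new HashSet<>();
    void registerObserver(ConfigurationObserver o)   { observers.add(o); }
    void deregisterObserver(ConfigurationObserver o) { observers.remove(o); }
    int observerCount() { return observers.size(); }
}

public class RegionSketch implements ConfigurationObserver {
    @Override
    public void onConfigurationChange() {
        // Region-level work is currently a no-op; stores handle updates.
    }

    void open(ConfigurationManager mgr)  { mgr.registerObserver(this); }

    // The proposed fix: undo the registration on close so the manager does
    // not keep a reference to (and keep notifying) a closed region.
    void close(ConfigurationManager mgr) { mgr.deregisterObserver(this); }

    public static void main(String[] args) {
        ConfigurationManager mgr = new ConfigurationManager();
        RegionSketch region = new RegionSketch();
        region.open(mgr);
        region.close(mgr);
        System.out.println("observers after close: " + mgr.observerCount());
    }
}
```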


