Re: [ANNOUNCE] New HBase committer Wei-Chiu Chuang

2020-05-14 Thread Ankit Singhal
Congratulations, Wei-Chiu! Welcome!

On Thu, May 14, 2020 at 3:49 PM Toshihiro Suzuki 
wrote:

> Congratulations Wei-Chiu!
>
> On Thu, May 14, 2020 at 8:25 PM Guangxu Cheng  wrote:
>
> > Congratulations and welcome Wei-Chiu !!!
> > --
> > Best Regards,
> > Guangxu
> >
> >
> > > Reid Chan  wrote on Thu, May 14, 2020 at 6:59 PM:
> >
> > >
> > > Congratulations and welcome!
> > >
> > >
> > > --
> > >
> > > Best regards,
> > > R.C
> > >
> > >
> > >
> > > 
> > > From: ramkrishna vasudevan 
> > > Sent: 14 May 2020 13:42
> > > To: dev
> > > Subject: Re: [ANNOUNCE] New HBase committer Wei-Chiu Chuang
> > >
> > > Congratulations Wei-Chiu !!!
> > >
> > > Regards
> > > Ram
> > >
> > > On Thu, May 14, 2020 at 10:55 AM Viraj Jasani 
> > wrote:
> > >
> > > > Congratulations Wei-Chiu !!
> > > >
> > > > On 2020/05/13 19:12:38, Sean Busbey  wrote:
> > > > > Folks,
> > > > >
> > > > > On behalf of the Apache HBase PMC I am pleased to announce that
> > > Wei-Chiu
> > > > > Chuang has accepted the PMC's invitation to become a committer on
> the
> > > > > project.
> > > > >
> > > > > We appreciate all of the great contributions Wei-Chiu has made to
> the
> > > > > community thus far and we look forward to his continued
> involvement.
> > > > >
> > > > > Allow me to be the first to congratulate Wei-Chiu on his new role!
> > > > >
> > > > > thanks,
> > > > > busbey
> > > > >
> > > >
> > >
> >
>


[jira] [Resolved] (HBASE-24364) [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction

2020-05-14 Thread Yi Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Mei resolved HBASE-24364.

Fix Version/s: 2.2.5
   2.3.0
   3.0.0-alpha-1
   Resolution: Fixed

> [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction
> --
>
> Key: HBASE-24364
> URL: https://issues.apache.org/jira/browse/HBASE-24364
> Project: HBase
>  Issue Type: Bug
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.5
>
>
> I found the following exception when I run ITBLL:
> {code:java}
> 2020-05-12 11:43:14,201 WARN  [ChaosMonkey] policies.Policy: Exception 
> performing action:
> java.lang.IllegalArgumentException: There is no data block encoder for given 
> id '6'
> at 
> org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.getEncodingById(DataBlockEncoding.java:168)
> at 
> org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.lambda$perform$0(ChangeEncodingAction.java:50)
> at 
> org.apache.hadoop.hbase.chaos.actions.Action.modifyAllTableColumns(Action.java:356)
> at 
> org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.perform(ChangeEncodingAction.java:48)
> at 
> org.apache.hadoop.hbase.chaos.policies.PeriodicRandomActionPolicy.runOneIteration(PeriodicRandomActionPolicy.java:59)
> at 
> org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Because PREFIX_TREE is removed in DataBlockEncoding:
> {code:java}
> /** Disable data block encoding. */
> NONE(0, null),
> // id 1 is reserved for the BITSET algorithm to be added later
> PREFIX(2, "org.apache.hadoop.hbase.io.encoding.PrefixKeyDeltaEncoder"),
> DIFF(3, "org.apache.hadoop.hbase.io.encoding.DiffKeyDeltaEncoder"),
> FAST_DIFF(4, "org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder"),
> // id 5 is reserved for the COPY_KEY algorithm for benchmarking
> // COPY_KEY(5, "org.apache.hadoop.hbase.io.encoding.CopyKeyDataBlockEncoder"),
> // PREFIX_TREE(6, "org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec"),
> ROW_INDEX_V1(7, "org.apache.hadoop.hbase.io.encoding.RowIndexCodecV1");
> {code}
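A minimal sketch of the fix direction (hypothetical, not the actual HBASE-24364 patch): pick the random encoding from the enum's declared values instead of from a raw id range, so reserved/removed ids such as 6 (PREFIX_TREE) can never be chosen. The enum below is a simplified stand-in for HBase's DataBlockEncoding.

```java
import java.util.Random;

public class EncodingPicker {
    // Simplified stand-in for org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    // note the gaps at ids 1, 5 and 6 (reserved or removed entries).
    enum Encoding {
        NONE(0), PREFIX(2), DIFF(3), FAST_DIFF(4), ROW_INDEX_V1(7);
        final int id;
        Encoding(int id) { this.id = id; }
    }

    // Buggy approach: a random raw id in [0, 8) may hit a removed id like 6.
    static int randomRawId(Random rnd) {
        return rnd.nextInt(8);
    }

    // Safer approach: choose among the encodings that actually exist.
    static Encoding randomEncoding(Random rnd) {
        Encoding[] values = Encoding.values();
        return values[rnd.nextInt(values.length)];
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        for (int i = 0; i < 5; i++) {
            Encoding e = randomEncoding(rnd);
            // Every pick is a live encoding; id 6 (removed PREFIX_TREE) is unreachable.
            System.out.println(e + " id=" + e.id);
        }
    }
}
```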



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (HBASE-24165) maxPoolSize is logged incorrectly in ByteBufferPool

2020-05-14 Thread Guanghao Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang reopened HBASE-24165:


Reopening this as it introduces a findbugs warning:
|{color:#00}Result of integer multiplication cast to long in new 
org.apache.hadoop.hbase.io.ByteBufferPool(int, int, boolean) At 
ByteBufferPool.java:[line 84]{color}|

> maxPoolSize is logged incorrectly in ByteBufferPool
> ---
>
> Key: HBASE-24165
> URL: https://issues.apache.org/jira/browse/HBASE-24165
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.4
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Fix For: 2.2.5
>
>
> In ByteBufferPool, _maxPoolSize_ is converted into byte format when logged,
> https://github.com/apache/hbase/blob/a521a80c4b9a8b0749c368d1ff66fea2ed2d77a2/hbase-common/src/main/java/org/apache/hadoop/hbase/io/ByteBufferPool.java#L85
>  
> Currently maxPoolSize is logged as below,
> 2020-04-10 14:20:56,000 INFO  [Time-limited test] io.ByteBufferPool(83): 
> Created with bufferSize=64 KB and maxPoolSize=320 B
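The findbugs complaint above is the classic `int * int` overflow pattern: the product is computed in 32-bit arithmetic before being widened to long. A minimal illustration with hypothetical values (not the actual ByteBufferPool code):

```java
public class IntMulOverflow {
    static final int BUFFER_SIZE = 64 * 1024;   // bytes per buffer
    static final int MAX_POOL_SIZE = 64 * 1024; // number of buffers

    // Overflows: both operands are int, so the multiply happens in int
    // and only afterwards is the (already wrapped) result widened to long.
    static long totalBytesBuggy() {
        return BUFFER_SIZE * MAX_POOL_SIZE;
    }

    // Correct: widen one operand first so the multiply happens in long.
    static long totalBytesFixed() {
        return (long) BUFFER_SIZE * MAX_POOL_SIZE;
    }

    public static void main(String[] args) {
        System.out.println("buggy: " + totalBytesBuggy()); // 0 due to int overflow
        System.out.println("fixed: " + totalBytesFixed()); // 4294967296
    }
}
```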





[jira] [Created] (HBASE-24376) MergeNormalizer is merging non-adjacent regions and causing region overlaps/holes.

2020-05-14 Thread Huaxiang Sun (Jira)
Huaxiang Sun created HBASE-24376:


 Summary: MergeNormalizer is merging non-adjacent regions and 
causing region overlaps/holes.
 Key: HBASE-24376
 URL: https://issues.apache.org/jira/browse/HBASE-24376
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 2.3.0
Reporter: Huaxiang Sun
Assignee: Huaxiang Sun


We found that the normalizer was merging non-adjacent regions, which causes 
inconsistencies (region overlaps/holes) in the cluster.
{code:java}
439055 2020-05-08 17:47:09,814 INFO 
org.apache.hadoop.hbase.master.normalizer.MergeNormalizationPlan: Executing 
merging normalization plan: MergeNormalizationPlan{firstRegion={ENCODED => 
47fe236a5e3649ded95cb64ad0c08492, NAME => 
'TABLE,\x03\x01\x05\x01\x04\x02,1554838974870.47fe236a5e3649ded95cb64ad   
0c08492.', STARTKEY => '\x03\x01\x05\x01\x04\x02', ENDKEY => 
'\x03\x01\x05\x01\x04\x02\x01\x02\x02201904082200\x00\x00\x03Mac\x00\x00\x00\x00\x00\x00\x00\x00\x00iMac13,1\x00\x00\x00\x00\x00\x049.3-14E260\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x05'},
 secondRegion={ENCODED => 0c0f2aa67f4329d5c4   8ba0320f173d31, NAME => 
'TABLE,\x03\x01\x05\x02\x01\x01,1554830735526.0c0f2aa67f4329d5c48ba0320f173d31.',
 STARTKEY => '\x03\x01\x05\x02\x01\x01', ENDKEY => '\x03\x01\x05\x02\x01\x02'}}
439056 2020-05-08 17:47:11,438 INFO org.apache.hadoop.hbase.ScheduledChore: 
CatalogJanitor-*:16000 average execution time: 1676219193 ns.
439057 2020-05-08 17:47:11,730 INFO org.apache.hadoop.hbase.master.HMaster: 
Client=null/null merge regions [47fe236a5e3649ded95cb64ad0c08492], 
[0c0f2aa67f4329d5c48ba0320f173d31]
 {code}
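A merge plan is only safe when the first region's end key equals the second region's start key; the plan in the log above violates that. A minimal adjacency check along those lines (a sketch only; the real normalizer works with RegionInfo objects and HBase's Bytes utilities):

```java
import java.util.Arrays;

public class RegionAdjacency {
    // Two regions are adjacent iff the first region's end key is
    // byte-for-byte equal to the second region's start key.
    static boolean areAdjacent(byte[] firstEndKey, byte[] secondStartKey) {
        return Arrays.equals(firstEndKey, secondStartKey);
    }

    public static void main(String[] args) {
        byte[] endA   = {0x03, 0x01, 0x05, 0x01, 0x04, 0x03};
        byte[] startB = {0x03, 0x01, 0x05, 0x02, 0x01, 0x01};
        // Non-adjacent: merging these would leave a hole between endA and startB.
        System.out.println(areAdjacent(endA, startB));   // false
        System.out.println(areAdjacent(startB, startB)); // true
    }
}
```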
 





[jira] [Created] (HBASE-24375) Point hbase-it pom value for testFailureIgnore to value defined in main hbase pom

2020-05-14 Thread Christine Feng (Jira)
Christine Feng created HBASE-24375:
--

 Summary: Point hbase-it pom value for testFailureIgnore to value 
defined in main hbase pom
 Key: HBASE-24375
 URL: https://issues.apache.org/jira/browse/HBASE-24375
 Project: HBase
  Issue Type: Task
Affects Versions: 1.6.0
Reporter: Christine Feng


The testFailureIgnore value in the hbase-it submodule pom is hard-coded to 
{{false}}; it should be changed to inherit the value defined in the main hbase 
pom for consistency.
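The change might look something like the following in the hbase-it pom (the property name `surefire.testFailureIgnore` is an illustrative assumption; the actual property defined in the main hbase pom may differ):

```xml
<!-- hbase-it/pom.xml: inherit rather than hard-code -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- was: <testFailureIgnore>false</testFailureIgnore> -->
    <testFailureIgnore>${surefire.testFailureIgnore}</testFailureIgnore>
  </configuration>
</plugin>
```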





[jira] [Resolved] (HBASE-23787) TestSyncTimeRangeTracker fails quite easily and allocates a very expensive array.

2020-05-14 Thread Mark Robert Miller (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Robert Miller resolved HBASE-23787.

Resolution: Not A Problem

I think the expensive array may have already been dealt with elsewhere.

> TestSyncTimeRangeTracker fails quite easily and allocates a very expensive 
> array.
> -
>
> Key: HBASE-23787
> URL: https://issues.apache.org/jira/browse/HBASE-23787
> Project: HBase
>  Issue Type: Test
>  Components: test
>Reporter: Mark Robert Miller
>Priority: Major
>
> I see this test fail a lot in my environments. It also allocates such a large 
> array that it is particularly memory wasteful, and it is difficult to get 
> good contention in the test as well.





[jira] [Resolved] (HBASE-23849) Harden small and medium tests for lots of parallel runs with re-used jvms.

2020-05-14 Thread Mark Robert Miller (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Robert Miller resolved HBASE-23849.

Resolution: Won't Fix

Small and medium tests don't actually take too long to get going; mostly they 
just have to deal with some issues around statics. I had these working well on 
master at one point, but I have only been looking at branch-2 for a while, so 
I'm not looking to go back to that.

> Harden small and medium tests for lots of parallel runs with re-used jvms.
> --
>
> Key: HBASE-23849
> URL: https://issues.apache.org/jira/browse/HBASE-23849
> Project: HBase
>  Issue Type: Test
>Reporter: Mark Robert Miller
>Priority: Major
>








[jira] [Resolved] (HBASE-7511) Replace fsReadLatency metrics with preads latency

2020-05-14 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-7511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-7511.
--
Resolution: Later

Resolve old issue

> Replace fsReadLatency metrics with preads latency
> -
>
> Key: HBASE-7511
> URL: https://issues.apache.org/jira/browse/HBASE-7511
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Rishit Shroff
>Assignee: Rishit Shroff
>Priority: Minor
>
> With HBase now using preads for all read ops, the read latency metrics need 
> to be updated to capture stats about the pread latencies. This issue is to 
> track that.





Re: [DISCUSS] New User Experience and Data Durability Guarantees on LocalFileSystem (HBASE-24086)

2020-05-14 Thread Nick Dimiduk
HBASE-24086 and HBASE-24106 have been reverted, HBASE-24271 has been
applied. Thanks for the fruitful discussion.

-n

On Fri, Apr 17, 2020 at 4:52 PM Nick Dimiduk  wrote:

> On Fri, Apr 17, 2020 at 3:31 PM Stack  wrote:
>
>> On writing to the local 'tmp' dir, that's fine, but quickstart was always
>> supposed to be a transient install (one example of setting config is the
>> setting of the tmp location). The messaging that this is the case needs
>> an edit after a re-read (I volunteer to do this and to give the refguide a
>> once-over on the lack of guarantees when an hbase deploy is unconfigured).
>>
>
> Sounds like you're reading my handiwork, pushed in HBASE-24106. I'm
> definitely open to editing help, yes please! Before that change, the Quick
> Start section required the user to set hbase.rootdir,
> hbase.zookeeper.property.dataDir, and
> hbase.unsafe.stream.capability.enforce all before that could start the
> local process.
>
> Can we have the start-out-of-the-box back please? It's a PITA having to go
>> edit config running a local build trying to test something, never mind the
>> poor noob whose first experience is a fail.
>>
>
> I agree.
>
> The conclusion I understand from this thread looks something like this:
>
> 1. revert HBASE-24086, make it so that running on `LocalFileSystem` is a
> fatal condition with default configs.
> 2. ship a conf/hbase-site.xml that contains
> hbase.unsafe.stream.capability.enforce=false, along with a big comment
> saying this is not safe for production.
> 3. ship a conf/hbase-site.xml that contains hbase.tmp.dir=./tmp, along
> with a comment saying herein you'll find temporary and persistent data,
> reconfigure for production with hbase.rootdir pointed to a durable
> filesystem that supports our required stream capabilities (see above).
> 4. update HBASE-24106 as appropriate.
>
> Neither 2 nor 3 are suitable for production deployments, thus the changes
> do not go into hbase-default.xml. Anyone standing up a production deploy
> must edit hbase-site.xml anyway, so this doesn't change anything. It also
> restores our "simple" first-time user experience of not needing to run
> anything besides `bin/start-hbase.sh` (or `bin/hbase master start`, or
> whatever it is we're telling people these days).
>
> We can reassess this once more when a durable equivalent to
> LocalFileSystem comes along.
>
> Thanks,
> Nick
>


[jira] [Resolved] (HBASE-24271) Set values in `conf/hbase-site.xml` that enable running on `LocalFileSystem` out of the box

2020-05-14 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-24271.
--
Release Note: 

HBASE-24271 makes changes to the default `conf/hbase-site.xml` such that 
`bin/hbase` will run directly out of the binary tarball or a compiled source 
tree, without any configuration modifications, against Hadoop 2.8+. This 
changes our long-standing history of shipping no configured values in 
`conf/hbase-site.xml`, so existing processes that assume this file is empty of 
configuration properties may require attention.
  Resolution: Fixed

> Set values in `conf/hbase-site.xml` that enable running on `LocalFileSystem` 
> out of the box
> ---
>
> Key: HBASE-24271
> URL: https://issues.apache.org/jira/browse/HBASE-24271
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0, 2.2.4
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 1.7.0, 2.2.5
>
>
> This ticket is to implement the changes as described on the [discussion on 
> dev|https://lists.apache.org/thread.html/r089de243a9bc9d923fa07c81e6bc825b82be68f567b892a342a0c61f%40%3Cdev.hbase.apache.org%3E].
>  It reverts and supersedes changes made on HBASE-24086 and HBASE-24106.
> {quote}
> The conclusion I understand from this thread looks something like this:
> 1. revert HBASE-24086, make it so that running on `LocalFileSystem` is a 
> fatal condition with default configs.
> 2. ship a conf/hbase-site.xml that contains 
> hbase.unsafe.stream.capability.enforce=false, along with a big comment saying 
> this is not safe for production.
> 3. ship a conf/hbase-site.xml that contains hbase.tmp.dir=./tmp, along with a 
> comment saying herein you'll find temporary and persistent data, reconfigure 
> for production with hbase.rootdir pointed to a durable filesystem that 
> supports our required stream capabilities (see above).
> 4. update HBASE-24106 as appropriate.
> Neither 2 nor 3 are suitable for production deployments, thus the changes do 
> not go into hbase-default.xml. Anyone standing up a production deploy must 
> edit hbase-site.xml anyway, so this doesn't change anything. It also restores 
> our "simple" first-time user experience of not needing to run anything 
> besides `bin/start-hbase.sh` (or `bin/hbase master start`, or whatever it is 
> we're telling people these days).
> We can reassess this once more when a durable equivalent to LocalFileSystem 
> comes along.
> {quote}
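Sketched as configuration, items 2 and 3 of the plan amount to a shipped `conf/hbase-site.xml` along these lines (illustrative only; comments abridged from the intent described above):

```xml
<configuration>
  <!-- NOT safe for production: disables stream capability enforcement so
       HBase can run on LocalFileSystem out of the box. -->
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
  <!-- Herein you'll find temporary AND persistent data; for production,
       reconfigure hbase.rootdir to point at a durable filesystem that
       supports the required stream capabilities. -->
  <property>
    <name>hbase.tmp.dir</name>
    <value>./tmp</value>
  </property>
</configuration>
```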





[jira] [Resolved] (HBASE-9528) Adaptive compaction

2020-05-14 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-9528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-9528.
--
Resolution: Later

Resolving old issue as later.

> Adaptive compaction
> ---
>
> Key: HBASE-9528
> URL: https://issues.apache.org/jira/browse/HBASE-9528
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.98.0
>Reporter: Liang Xie
>Priority: Major
>
> Currently, the compaction policy granularity is a single machine. We had a 
> thought to introduce a new cluster-granularity decision, so that we could 
> improve these cases based on the cluster's running status:
> 1) when many nodes are compacting aggressively (we call it a cluster 
> compaction storm), we should throttle it.
> 2) do more compaction when there is low traffic on the current cluster 
> (similar to the off-peak feature), not limited by a configured time range 
> (like the off-peak timerange), but triggered by load or QPS or other signals.
> comments? thanks





[jira] [Resolved] (HBASE-24086) Disable output stream capability enforcement when running in standalone mode

2020-05-14 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-24086.
--
Fix Version/s: (was: 2.2.5)
   (was: 1.7.0)
   (was: 2.3.0)
   (was: 3.0.0-alpha-1)
 Release Note:   (was: In the presence of an instance of `LocalFileSystem` 
used for a WAL, HBase will degrade to NOT enforcing unsafe stream capabilities. 
A warning log message is generated each time this occurs.)
 Assignee: (was: Nick Dimiduk)
   Resolution: Won't Fix

Reverted from all branches. Superseded by HBASE-24271.

> Disable output stream capability enforcement when running in standalone mode
> 
>
> Key: HBASE-24086
> URL: https://issues.apache.org/jira/browse/HBASE-24086
> Project: HBase
>  Issue Type: Task
>  Components: master, Operability
>Affects Versions: 3.0.0-alpha-1, 2.3.0
>Reporter: Nick Dimiduk
>Priority: Critical
>
> {noformat}
> $ 
> JAVA_HOME=/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home 
> mvn clean install -DskipTests
> $ 
> JAVA_HOME=/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home 
> ./bin/hbase master start
> {noformat}
> gives
> {noformat}
> 2020-03-30 17:12:43,857 ERROR 
> [master/192.168.111.13:16000:becomeActiveMaster] master.HMaster: Failed to 
> become active master  
>  
> java.io.IOException: cannot get log writer
>   
> 
> at 
> org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:118)
>   
>   
> at 
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createAsyncWriter(AsyncFSWAL.java:704)
>   
>  
> at 
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:710)
>   
>   
> at 
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:128)
>   
>   
> at 
> org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:839)
>   
>   
> at 
> org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:549)
>   
>   
> at 
> org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:490)
>   
> 
> at 
> org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:156)
>   
>
> at 
> org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:61)
>   
> 
> at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:297) 
>   
> 
> at 
> org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.createWAL(RegionProcedureStore.java:256)
>   
>   
> at 
> org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.bootstrap(RegionProcedureStore.java:273)
>   
>   
> at 
> org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.recoverLease(RegionProcedureStore.java:482)
>   
>
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.init(ProcedureExecutor.java:587)
>   
>   
> at 
> org.apache.hadoop.hbase.master.HMaster.createProcedureExecutor(HMaster.jav

[jira] [Resolved] (HBASE-14992) Add cache stats of past n periods in region server status page

2020-05-14 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-14992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-14992.
---
Resolution: Later

Closing old JIRA

> Add cache stats of past n periods in region server status page
> --
>
> Key: HBASE-14992
> URL: https://issues.apache.org/jira/browse/HBASE-14992
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache, metrics
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Priority: Minor
>
> The cache stats of past n periods, such as SumHitCountsPastNPeriods, 
> SumHitCachingCountsPastNPeriods, etc, 
> are useful to indicate the real-time read load of region server, especially 
> for temporary read peak. It is helpful to add such metrics to 
> BlockCache#Stats tab of region server status page. Discussion and suggestion 
> are welcomed.   





[jira] [Resolved] (HBASE-24106) Update getting started documentation after HBASE-24086

2020-05-14 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-24106.
--
Resolution: Won't Fix

Original commit reverted, superseded by HBASE-24271.

> Update getting started documentation after HBASE-24086
> --
>
> Key: HBASE-24106
> URL: https://issues.apache.org/jira/browse/HBASE-24106
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Nick Dimiduk
>Priority: Major
>
> HBASE-24086 allows HBase to degrade gracefully to running on a 
> {{LocalFileSystem}} without further user configuration. Update the docs 
> accordingly.





[jira] [Resolved] (HBASE-24368) Let HBCKSCP clear 'Unknown Servers', even if RegionStateNode has RegionLocation == null

2020-05-14 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-24368.
---
Fix Version/s: 2.3.0
   3.0.0-alpha-1
 Hadoop Flags: Reviewed
 Assignee: Michael Stack
   Resolution: Fixed

Pushed to branch-2.3+. Thanks for review [~huaxiangsun]

> Let HBCKSCP clear 'Unknown Servers', even if RegionStateNode has 
> RegionLocation == null
> ---
>
> Key: HBASE-24368
> URL: https://issues.apache.org/jira/browse/HBASE-24368
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Affects Versions: 2.3.0
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> This is an incidental issue noticed while in a hole, trying to fix up a 
> cluster. The 'obvious' remediation didn't work. This issue is about 
> addressing that.
> HBASE-23594 added a filtering of Regions on the crashed server to handle the 
> case where an Assign may be concurrent to the ServerCrashProcedure. To avoid 
> double assign, the SCP will skip assign if the RegionStateNode RegionLocation 
> is not that of the crashed server.
> This is good.
> Where it is an obstacle is when a Region is stuck in OPENING state, it 
> references an 'Unknown Server' -- a server no longer tracked by the Master -- 
> and there is no assign currently in flight. In this case, scheduling a 
> ServerCrashProcedure to clean up the reference to the Unknown Server and to 
> get the Region reassigned skips out when RegionStateNode in Master has a 
> RegionLocation that does not match that of the ServerCrashProcedure, even 
> when it is set to null (we set the RegionLocation to null when we fail an 
> assign as we might if the server no longer is part of the cluster).
> For background, the cluster had a RIT. The RIT was a Region failing to open 
> because of a missing Reference (another issue). The Region open would fail 
> with a FileNotFoundException. The master would attempt assign and then would 
> fail when it went to confirm OPEN, logging the complaint about the FNFE and 
> asking for operator intervention in the master logs.
> This state was in place for weeks on this particular cluster (a dev cluster 
> not under close observation). The cluster had been restarted once or twice so 
> the server the Region had once been on was no longer 'known' but it still had 
> an entry in the hbase:meta table as last location assigned (The now 'Unknown 
> Server').
> To fix, I went about the task in the wrong order. I bypassed the long-running 
> stuck procedure to terminate it and cleanup 'Procedures and Locks'. Mistake. 
> Now there was no longer an assign Procedure for this Region. But I now had a 
> Region in OPENING state with a reference to an unknown server with an 
> in-memory RegionStateNode whose RegionLocation was null (set null on each 
> failed assign). Running catalogjanitor_run and hbck_chore_report had the 
> unknown server show in the 'HBCK Report' in the 'Unknown Servers' list. 
> Attempts at assign fail because Region is in OPENING state -- you can't 
> assign a Region in OPENING state. Scheduling an HBCKSCP via hbck2 
> scheduleRecoveries always generated the below in the logs.
> {code}
> org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure: pid=157217, 
> state=RUNNABLE:SERVER_CRASH_ASSIGN, locked=true; HBCKServerCrashProcedure 
> server=unknown_server.example.com,16020,1587577972683, splitWal=true, 
> meta=false found a region state=OPENING, location=null, 
> table=bobby_analytics, region=1501ea3bd822c1a3e4e6216ea48733bd which is no 
> longer on us unknown_server.example.com,16020,1587577972683, give up 
> assigning...
> {code}
> My workaround was setting region state to CLOSED with hbck2 and then doing an 
> assign with hbck2. At this point I noticed the FNFE. Easier if the HBCKSCP 
> worked.





Re: Row size analyzer in HBase

2020-05-14 Thread Stack
On Thu, May 14, 2020 at 12:23 PM Andrew Purtell  wrote:

> HBase doesn't care about the cross HFile "row" concept in the same way that
> Phoenix does.
>
> As discussed earlier in this thread, during compaction we would call the
> sketch update function while processing cells in the HFile, and store the
> result into the HFile trailer. That's it.
>
>
Esteban opened this a good while back:
https://issues.apache.org/jira/browse/HBASE-17756. I like his idea of
dumping the stats with pretty printer per hfile. Let me give it a go...
S
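The per-Region "compute stats, then merge for a table view" idea can be illustrated with plain Java. This is an exact-percentile stand-in for the mergeable DataSketches quantile sketches discussed in this thread, purely a sketch of the workflow, not the library API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RowSizePercentiles {
    // Exact stand-in for a quantile sketch: just collect per-Region row sizes.
    static List<Long> regionRowSizes(long... sizes) {
        List<Long> out = new ArrayList<>();
        for (long s : sizes) out.add(s);
        return out;
    }

    // "Merging" two Regions' stats is concatenation here; real quantile
    // sketches support the same merge operation in bounded memory.
    static List<Long> merge(List<Long> a, List<Long> b) {
        List<Long> out = new ArrayList<>(a);
        out.addAll(b);
        return out;
    }

    // Nearest-rank percentile, p in (0, 1].
    static long percentile(List<Long> sizes, double p) {
        List<Long> sorted = new ArrayList<>(sizes);
        Collections.sort(sorted);
        int idx = (int) Math.ceil(p * sorted.size()) - 1;
        return sorted.get(Math.max(idx, 0));
    }

    public static void main(String[] args) {
        List<Long> region1 = regionRowSizes(100, 200, 300, 400, 500);
        List<Long> region2 = regionRowSizes(150, 250, 350, 450, 5000);
        List<Long> table = merge(region1, region2); // table-level view
        System.out.println("p90=" + percentile(table, 0.90)); // p90=500
        System.out.println("max=" + percentile(table, 1.0));  // max=5000
    }
}
```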




> For Phoenix, because it cares about rows, something has to collect all
> cells for a given row across all files in the store, and hand the complete
> row to the sketch update function, and storing sketches in HFiles no longer
> makes sense, because the sketch is of data that lives in rows that span
> multiple HFiles. So you should take this to dev@phoenix, probably.
>
>
>
>
>
>
> On Thu, May 14, 2020 at 10:35 AM Sukumar Maddineni
>  wrote:
>
> > Hi Stack,
> >
> > Thanks for that pointer, I am not aware of sketches(one more concept to
> > learn :)). I will explore and see if this helps.
> >
> > Hi Andrew,
> >
> > Yes, this is needed for a Phoenix table, but there are two asks. One is 
> > from the customer side, who wants to know the size of their actual rows, 
> > which is equal to the sum of the sizes of all columns' latest versions 
> > (there might be extra versions or delete markers, which might not be 
> > something the customer is interested in since they don't read that data). 
> > The second ask is from the service-owner point of view, where we want to 
> > know the size of the full row including all cells; this is needed for 
> > internal operations like backups, migrations, growth analysis, and stats. 
> > If we have something at the HBase level then coming up with a similar one 
> > for a Phoenix table seems not to be that big of a job (I might be wrong).
> >
> >
> > Thanks
> > Sukumar
> >
> >
> >
> > On Thu, May 14, 2020 at 10:11 AM Andrew Purtell 
> > wrote:
> >
> > > > I keep thinking about inlining this stuff at flush/compaction time and
> > > > appending the sketch to an hfile. After the fact you could read the
> > > > sketches in the tail of the hfiles for some counts on a Region basis but
> > > > it wouldn't be row-based.
> > >
> > > There should be an issue for this if not one already (I've heard it
> > > mentioned before). It would be a very nice to have. Wasn't the sketch
> > stuff
> > > from Yahoo incubated? ... Yes: https://datasketches.apache.org/ ,
> > > https://incubator.apache.org/clutch/datasketches.html . There's
> > something
> > > in the family to try, so to speak.
> > >
> > > The row vs cell distinction is an important one. If you are looking to
> > add
> > > or use something provided by HBase, the view of the data will be cell
> > > based. That might be what you need, it might not be. Table level
> > statistics
> > > (aggregated from region sketches as stack suggests) would roll up
> either
> > > cells or rows so could work if that's the granularity you need.
> > >
> > > If the ask is for row based statistics for Phoenix, this is a question
> > > better asked on dev@phoenix.
> > >
> > >
> > > On Thu, May 14, 2020 at 9:19 AM Stack  wrote:
> > >
> > > > On Wed, May 13, 2020 at 10:38 PM Sukumar Maddineni
> > > >  wrote:
> > > >
> > > > > Hello everyone,
> > > > >
> > > > > Is there any existing tool which we can use to understand the size of
> > > > > the rows in a table? Like, we want to know what is the p90 and max row
> > > > > size of rows in a given table, to understand the usage pattern and see
> > > > > how much room we have before having large rows.
> > > > >
> > > > > I was thinking similar to RowCounter with reducer to consolidate
> the
> > > > info.
> > > > >
> > > > >
> > > > I've had some success scanning rows on a per-Region basis dumping a
> > > report
> > > > per Region. I was passing the per row Results via something like the
> > > below:
> > > >
> > > >static void processRowResult(Result result, Sketches sketches) {
> > > >  // System.out.println(result.toString());
> > > >  long rowSize = 0;
> > > >  int columnCount = 0;
> > > >  for (Cell cell : result.rawCells()) {
> > > >rowSize += estimatedSizeOfCell(cell);
> > > >columnCount += 1;
> > > >  }
> > > >  sketches.rowSizeSketch.update(rowSize);
> > > >  sketches.columnCountSketch.update(columnCount);
> > > >}
> > > >
> > > > ... where the sketches are variants of
> > > > com.yahoo.sketches.quantiles.*Sketch. The latter are nice in that the
> > > > sketches can be aggregated, so you can after the fact make table
> > > > sketches by summing all of the Region sketches. I had 100 quantiles so
> > > > could do 95% or 96%, etc. The bins to use for, say, data size take a
> > > > bit of tuning, but you can make a decent guess for the first go-round
> > > > and see how you do.
> > > >
> > > > I keep thinking about inlining this stuff at flush/compac

Re: Row size analyzer in HBase

2020-05-14 Thread Andrew Purtell
HBase doesn't care about the cross HFile "row" concept in the same way that
Phoenix does.

As discussed earlier in this thread, during compaction we would call the
sketch update function while processing cells in the HFile, and store the
result into the HFile trailer. That's it.
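A rough sketch of that flow, in Python for illustration: the size histogram below is a stand-in for a real quantile sketch, and the length-suffixed trailer layout is invented here, not HFile's actual format.

```python
import io
import json

def compact(cells):
    """Toy 'compaction': write cells out, then append a size summary as a trailer."""
    hist = {}
    out = io.BytesIO()
    for cell in cells:
        out.write(cell)                       # write the cell to the new "HFile"
        bucket = len(cell) // 16 * 16         # bucket cell sizes into 16-byte bins
        hist[bucket] = hist.get(bucket, 0) + 1
    # A real implementation would serialize a mergeable quantile sketch here.
    trailer = json.dumps({"cellSizeHistogram": hist}).encode()
    out.write(trailer)
    out.write(len(trailer).to_bytes(4, "big"))  # length suffix so a reader can find it
    return out.getvalue()

def read_trailer(hfile_bytes):
    """Read the appended summary back without scanning the data."""
    n = int.from_bytes(hfile_bytes[-4:], "big")
    return json.loads(hfile_bytes[-4 - n:-4])

hfile = compact([b"a" * 10, b"b" * 20, b"c" * 25])
print(read_trailer(hfile))  # {'cellSizeHistogram': {'0': 1, '16': 2}}
```

Reading just the trailer gives a per-file size distribution without re-reading the data, which is the attraction of computing it at compaction time.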

For Phoenix, because it cares about rows, something has to collect all
cells for a given row across all files in the store and hand the complete
row to the sketch update function. Storing sketches in HFiles then no longer
makes sense, because the sketch would describe rows that span multiple
HFiles. So you should probably take this to dev@phoenix.






On Thu, May 14, 2020 at 10:35 AM Sukumar Maddineni
 wrote:

> Hi Stack,
>
> Thanks for that pointer, I am not aware of sketches(one more concept to
> learn :)). I will explore and see if this helps.
>
> Hi Andrew,
>
> Yes, this is needed for a Phoenix table but there are two asks. one is from
> customer side who wants to know the size of their actual rows which is
> equal to the sum of the size of all columns latest version(there might
> extra versions or delete markers which might not be something customer
> interested since they don't read that data) and second ask is from service
> owner point of view where we want to know the size of full row including
> all cells, this is needed for internal operations like backups, migrations,
> growth analysis, stats.  If we have something at HBase level then coming up
> with a similar one for Phoenix table seems to be not that of a big job(I
> might be wrong).
>
>
> Thanks
> Sukumar
>
>
>
> On Thu, May 14, 2020 at 10:11 AM Andrew Purtell 
> wrote:
>
> > > I keep thinking about inlining this stuff at flush/compaction time and
> > appending the sketch to an hfile. After the fact you could read the
> > sketches in the tail of the hfiles for some counts on a Region basis but
> it
> > wouldn't be row-based.
> >
> > There should be an issue for this if not one already (I've heard it
> > mentioned before). It would be a very nice to have. Wasn't the sketch
> stuff
> > from Yahoo incubated? ... Yes: https://datasketches.apache.org/ ,
> > https://incubator.apache.org/clutch/datasketches.html . There's
> something
> > in the family to try, so to speak.
> >
> > The row vs cell distinction is an important one. If you are looking to
> add
> > or use something provided by HBase, the view of the data will be cell
> > based. That might be what you need, it might not be. Table level
> statistics
> > (aggregated from region sketches as stack suggests) would roll up either
> > cells or rows so could work if that's the granularity you need.
> >
> > If the ask is for row based statistics for Phoenix, this is a question
> > better asked on dev@phoenix.
> >
> >
> > On Thu, May 14, 2020 at 9:19 AM Stack  wrote:
> >
> > > On Wed, May 13, 2020 at 10:38 PM Sukumar Maddineni
> > >  wrote:
> > >
> > > > Hello everyone,
> > > >
> > > > Is there any existing tool which we can use to understand the size of
> > the
> > > > rows in a table.  Like we want to know what is p90, max row size of
> > rows
> > > in
> > > > a given table to understand the usage pattern and see how much room
> we
> > > have
> > > > before having large rows.
> > > >
> > > > I was thinking similar to RowCounter with reducer to consolidate the
> > > info.
> > > >
> > > >
> > > I've had some success scanning rows on a per-Region basis dumping a
> > report
> > > per Region. I was passing the per row Results via something like the
> > below:
> > >
> > >static void processRowResult(Result result, Sketches sketches) {
> > >  // System.out.println(result.toString());
> > >  long rowSize = 0;
> > >  int columnCount = 0;
> > >  for (Cell cell : result.rawCells()) {
> > >rowSize += estimatedSizeOfCell(cell);
> > >columnCount += 1;
> > >  }
> > >  sketches.rowSizeSketch.update(rowSize);
> > >  sketches.columnCountSketch.update(columnCount);
> > >}
> > >
> > > ... where the sketches are variants of
> > > com.yahoo.sketches.quantiles.*Sketch. The latter are nice in that the
> > > sketches can be aggregated so you can after-the-fact make table
> sketches
> > by
> > > summing all of the Region sketches. I had a 100 quantiles so could do
> 95%
> > > or 96%, etc. The bins to use for say data size take a bit of tuning but
> > can
> > > make a decent guess for first go round and see how you do.
> > >
> > > I keep thinking about inlining this stuff at flush/compaction time and
> > > appending the sketch to an hfile. After the fact you could read the
> > > sketches in the tail of the hfiles for some counts on a Region basis
> but
> > it
> > > wouldn't be row-based. For row-based, you'd have to read Rows (hfiles
> are
> > > buckets of Cells, not rows).
> > >
> > > S
> > >
> > >
> > >
> > > >
> > > > --
> > > > Sukumar
> > > >
> > > > <
> https://smart.salesforce.com/sig/smaddineni//us_mb/default/link.html>
> > > >
> > >
> >
> >
> > --
> > Best regards,
> > Andr

[jira] [Resolved] (HBASE-24366) Document how to move WebUI access log entries to a separate log file

2020-05-14 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-24366.
--
Resolution: Duplicate

Indeed. Thanks [~zhangduo].

> Document how to move WebUI access log entries to a separate log file
> 
>
> Key: HBASE-24366
> URL: https://issues.apache.org/jira/browse/HBASE-24366
> Project: HBase
>  Issue Type: Task
>  Components: master, regionserver
>Affects Versions: 2.3.0
>Reporter: Nick Dimiduk
>Priority: Major
>
> I've noticed that after a recent commit, we now have WebUI access log lines 
> going into our service log file. The log entries are going to a logger called 
> {{http.requests.regionserver}}, and after the preamble of timestamp, log 
> level, and logger, they appear to conform to the 
> [CLF|https://en.wikipedia.org/wiki/Common_Log_Format] specification. Tools 
> designed for parsing http logs usually expect to receive just the CLF entries, 
> without any preprocessing.
> We should document how to configure the service to log these entries into a 
> separate log file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-24350) HBase table level replication metrics for shippedBytes are always 0

2020-05-14 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell resolved HBASE-24350.
-
Fix Version/s: 2.4.0
   1.7.0
   3.0.0-alpha-1
 Hadoop Flags: Reviewed
   Resolution: Fixed

> HBase table level replication metrics for shippedBytes are always 0
> ---
>
> Key: HBASE-24350
> URL: https://issues.apache.org/jira/browse/HBASE-24350
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 3.0.0-alpha-1, master, 1.7.0, 2.4.0
>Reporter: Sandeep Pal
>Assignee: Sandeep Pal
>Priority: Major
> Fix For: 3.0.0-alpha-1, 1.7.0, 2.4.0
>
>
> It was observed during some investigations that table-level metrics for 
> shippedBytes are consistently 0 even though data is getting shipped.
> There are two problems with table-level metrics:
>  # There are no table-level metrics for shipped bytes.
>  # It uses `MetricsReplicationSourceSourceImpl`, 
> which creates all source-level metrics at the table level as well but updates 
> only ageOfLastShippedOp. This reports a lot of false/incorrect replication 
> metrics at the table level. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Row size analyzer in HBase

2020-05-14 Thread Sukumar Maddineni
Hi Stack,

Thanks for that pointer. I was not aware of sketches (one more concept to
learn :)). I will explore and see if this helps.

Hi Andrew,

Yes, this is needed for a Phoenix table, but there are two asks. One is from
the customer side: customers want to know the size of their actual rows, which
is the sum of the sizes of the latest versions of all columns (there might be
extra versions or delete markers, which customers may not care about since
they don't read that data). The second ask is from the service owner's point
of view: we want to know the size of the full row including all cells, which
is needed for internal operations like backups, migrations, growth analysis,
and stats. If we have something at the HBase level, then coming up with a
similar one for a Phoenix table seems like not that big of a job (I might be
wrong).


Thanks
Sukumar



On Thu, May 14, 2020 at 10:11 AM Andrew Purtell  wrote:

> > I keep thinking about inlining this stuff at flush/compaction time and
> appending the sketch to an hfile. After the fact you could read the
> sketches in the tail of the hfiles for some counts on a Region basis but it
> wouldn't be row-based.
>
> There should be an issue for this if not one already (I've heard it
> mentioned before). It would be a very nice to have. Wasn't the sketch stuff
> from Yahoo incubated? ... Yes: https://datasketches.apache.org/ ,
> https://incubator.apache.org/clutch/datasketches.html . There's something
> in the family to try, so to speak.
>
> The row vs cell distinction is an important one. If you are looking to add
> or use something provided by HBase, the view of the data will be cell
> based. That might be what you need, it might not be. Table level statistics
> (aggregated from region sketches as stack suggests) would roll up either
> cells or rows so could work if that's the granularity you need.
>
> If the ask is for row based statistics for Phoenix, this is a question
> better asked on dev@phoenix.
>
>
> On Thu, May 14, 2020 at 9:19 AM Stack  wrote:
>
> > On Wed, May 13, 2020 at 10:38 PM Sukumar Maddineni
> >  wrote:
> >
> > > Hello everyone,
> > >
> > > Is there any existing tool which we can use to understand the size of
> the
> > > rows in a table.  Like we want to know what is p90, max row size of
> rows
> > in
> > > a given table to understand the usage pattern and see how much room we
> > have
> > > before having large rows.
> > >
> > > I was thinking similar to RowCounter with reducer to consolidate the
> > info.
> > >
> > >
> > I've had some success scanning rows on a per-Region basis dumping a
> report
> > per Region. I was passing the per row Results via something like the
> below:
> >
> >static void processRowResult(Result result, Sketches sketches) {
> >  // System.out.println(result.toString());
> >  long rowSize = 0;
> >  int columnCount = 0;
> >  for (Cell cell : result.rawCells()) {
> >rowSize += estimatedSizeOfCell(cell);
> >columnCount += 1;
> >  }
> >  sketches.rowSizeSketch.update(rowSize);
> >  sketches.columnCountSketch.update(columnCount);
> >}
> >
> > ... where the sketches are variants of
> > com.yahoo.sketches.quantiles.*Sketch. The latter are nice in that the
> > sketches can be aggregated so you can after-the-fact make table sketches
> by
> > summing all of the Region sketches. I had a 100 quantiles so could do 95%
> > or 96%, etc. The bins to use for say data size take a bit of tuning but
> can
> > make a decent guess for first go round and see how you do.
> >
> > I keep thinking about inlining this stuff at flush/compaction time and
> > appending the sketch to an hfile. After the fact you could read the
> > sketches in the tail of the hfiles for some counts on a Region basis but
> it
> > wouldn't be row-based. For row-based, you'd have to read Rows (hfiles are
> > buckets of Cells, not rows).
> >
> > S
> >
> >
> >
> > >
> > > --
> > > Sukumar
> > >
> > > 
> > >
> >
>
>
> --
> Best regards,
> Andrew
>
> Words like orphans lost among the crosstalk, meaning torn from truth's
> decrepit hands
>- A23, Crosstalk
>


-- 




Re: PR linking broken...again

2020-05-14 Thread Bharath Vissapragada
The issue has been fixed. I can see notifications for new PRs, and they
are also being linked to their Jiras.

On Wed, May 13, 2020 at 12:21 PM Bharath Vissapragada 
wrote:

> Seems like an infra issue
>  across multiple
> projects. From what I can tell, email notifications for new PRs are also
> not working from the past two days.
>
> Please make sure to link the PRs manually until it is fixed and add
> specific reviewers if you'd like to get your PR noticed (or send out an
> email if urgent).
>
> Just FYI.
>


Re: Row size analyzer in HBase

2020-05-14 Thread Andrew Purtell
> I keep thinking about inlining this stuff at flush/compaction time and
appending the sketch to an hfile. After the fact you could read the
sketches in the tail of the hfiles for some counts on a Region basis but it
wouldn't be row-based.

There should be an issue for this if not one already (I've heard it
mentioned before). It would be a very nice thing to have. Wasn't the sketch stuff
from Yahoo incubated? ... Yes: https://datasketches.apache.org/ ,
https://incubator.apache.org/clutch/datasketches.html . There's something
in the family to try, so to speak.

The row vs cell distinction is an important one. If you are looking to add
or use something provided by HBase, the view of the data will be cell
based. That might be what you need, it might not be. Table level statistics
(aggregated from region sketches as stack suggests) would roll up either
cells or rows so could work if that's the granularity you need.

If the ask is for row based statistics for Phoenix, this is a question
better asked on dev@phoenix.


On Thu, May 14, 2020 at 9:19 AM Stack  wrote:

> On Wed, May 13, 2020 at 10:38 PM Sukumar Maddineni
>  wrote:
>
> > Hello everyone,
> >
> > Is there any existing tool which we can use to understand the size of the
> > rows in a table.  Like we want to know what is p90, max row size of rows
> in
> > a given table to understand the usage pattern and see how much room we
> have
> > before having large rows.
> >
> > I was thinking similar to RowCounter with reducer to consolidate the
> info.
> >
> >
> I've had some success scanning rows on a per-Region basis dumping a report
> per Region. I was passing the per row Results via something like the below:
>
>static void processRowResult(Result result, Sketches sketches) {
>  // System.out.println(result.toString());
>  long rowSize = 0;
>  int columnCount = 0;
>  for (Cell cell : result.rawCells()) {
>rowSize += estimatedSizeOfCell(cell);
>columnCount += 1;
>  }
>  sketches.rowSizeSketch.update(rowSize);
>  sketches.columnCountSketch.update(columnCount);
>}
>
> ... where the sketches are variants of
> com.yahoo.sketches.quantiles.*Sketch. The latter are nice in that the
> sketches can be aggregated so you can after-the-fact make table sketches by
> summing all of the Region sketches. I had a 100 quantiles so could do 95%
> or 96%, etc. The bins to use for say data size take a bit of tuning but can
> make a decent guess for first go round and see how you do.
>
> I keep thinking about inlining this stuff at flush/compaction time and
> appending the sketch to an hfile. After the fact you could read the
> sketches in the tail of the hfiles for some counts on a Region basis but it
> wouldn't be row-based. For row-based, you'd have to read Rows (hfiles are
> buckets of Cells, not rows).
>
> S
>
>
>
> >
> > --
> > Sukumar
> >
> > 
> >
>


-- 
Best regards,
Andrew

Words like orphans lost among the crosstalk, meaning torn from truth's
decrepit hands
   - A23, Crosstalk


[jira] [Resolved] (HBASE-23832) Old config hbase.hstore.compactionThreshold is ignored

2020-05-14 Thread Sambit Mohapatra (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sambit Mohapatra resolved HBASE-23832.
--
Hadoop Flags: Reviewed
  Resolution: Fixed

> Old config hbase.hstore.compactionThreshold is ignored
> --
>
> Key: HBASE-23832
> URL: https://issues.apache.org/jira/browse/HBASE-23832
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Sambit Mohapatra
>Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.1.10, 2.2.5
>
>
> In 2.x we added the new name 'hbase.hstore.compaction.min' for this.  Still, for 
> compatibility we allow the old config name and honor it in code:
> {code}
> minFilesToCompact = Math.max(2, conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY,
>   /*old name*/ conf.getInt("hbase.hstore.compactionThreshold", 3)));
> {code}
> But if hbase.hstore.compactionThreshold alone is configured by the user, it has 
> no effect.
> This is because hbase-default.xml ships the new config with a value of 
> 3, so the call conf.getInt(HBASE_HSTORE_COMPACTION_MIN_KEY) always returns 
> 3 even when the user did not explicitly configure it and instead used 
> the old key.
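To see the masking effect concretely, here is a toy model in Python; the dicts stand in for the layered Configuration, and the "fixed" variant is just one possible approach, not the actual patch:

```python
# Defaults shipped in hbase-default.xml: only the *new* key has a default.
DEFAULTS = {"hbase.hstore.compaction.min": 3}
# The user configures only the old key.
user_conf = {"hbase.hstore.compactionThreshold": 8}

def get_int(key, fallback):
    # Mirrors conf.getInt(key, fallback): user value, else shipped default, else fallback.
    return user_conf.get(key, DEFAULTS.get(key, fallback))

# Buggy pattern from the snippet above: the shipped default for the new key (3)
# always wins, so the old-key fallback is dead code.
buggy = max(2, get_int("hbase.hstore.compaction.min",
                       get_int("hbase.hstore.compactionThreshold", 3)))

def get_int_fixed(new_key, old_key, fallback):
    # One possible fix: consult the shipped defaults only after checking
    # whether the user explicitly set either key.
    for key in (new_key, old_key):
        if key in user_conf:
            return user_conf[key]
    return DEFAULTS.get(new_key, fallback)

fixed = max(2, get_int_fixed("hbase.hstore.compaction.min",
                             "hbase.hstore.compactionThreshold", 3))
print(buggy, fixed)  # 3 8 -- the buggy path ignores the user's 8
```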



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Row size analyzer in HBase

2020-05-14 Thread Stack
On Wed, May 13, 2020 at 10:38 PM Sukumar Maddineni
 wrote:

> Hello everyone,
>
> Is there any existing tool which we can use to understand the size of the
> rows in a table?  For example, we want to know the p90 and max row size for
> a given table, to understand the usage pattern and see how much room we have
> before rows become too large.
>
> I was thinking of something similar to RowCounter, with a reducer to
> consolidate the info.
>
>
I've had some success scanning rows on a per-Region basis dumping a report
per Region. I was passing the per row Results via something like the below:

    static void processRowResult(Result result, Sketches sketches) {
      // System.out.println(result.toString());
      long rowSize = 0;
      int columnCount = 0;
      for (Cell cell : result.rawCells()) {
        rowSize += estimatedSizeOfCell(cell);
        columnCount += 1;
      }
      sketches.rowSizeSketch.update(rowSize);
      sketches.columnCountSketch.update(columnCount);
    }

... where the sketches are variants of
com.yahoo.sketches.quantiles.*Sketch. The latter are nice in that they
can be aggregated, so you can make table sketches after the fact by
merging all of the Region sketches. I had 100 quantiles, so I could do 95%
or 96%, etc. The bins to use for, say, data size take a bit of tuning, but you
can make a decent guess for the first go-round and see how you do.
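To make the aggregation idea concrete, here is a minimal Python stand-in: exact quantiles over made-up per-Region samples play the role of mergeable sketches like the DataSketches ones. With real sketches, the merge step would be a sketch union rather than list concatenation.

```python
import random
from statistics import quantiles

random.seed(42)
# Hypothetical per-Region row sizes in bytes (one list per Region).
region_samples = {
    "region-a": [random.randint(100, 5_000) for _ in range(1000)],
    "region-b": [random.randint(100, 50_000) for _ in range(1000)],
}

def pct(values, p):
    # Exact p-th percentile; a quantile sketch would approximate this cheaply.
    return quantiles(values, n=100)[p - 1]

# Per-Region report, like the per-Region dump described above.
for name, sizes in region_samples.items():
    print(f"{name}: p90={pct(sizes, 90)} max={max(sizes)}")

# Table-level view built after the fact by merging the per-Region data.
table = [s for sizes in region_samples.values() for s in sizes]
print(f"table: p90={pct(table, 90)} p95={pct(table, 95)} max={max(table)}")
```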

I keep thinking about inlining this stuff at flush/compaction time and
appending the sketch to an hfile. After the fact you could read the
sketches in the tail of the hfiles for some counts on a Region basis but it
wouldn't be row-based. For row-based, you'd have to read Rows (hfiles are
buckets of Cells, not rows).

S



>
> --
> Sukumar
>
> 
>


Re: [ANNOUNCE] New HBase committer Wei-Chiu Chuang

2020-05-14 Thread Guangxu Cheng
Congratulations and welcome Wei-Chiu !!!
--
Best Regards,
Guangxu


Reid Chan  wrote on Thu, May 14, 2020 at 6:59 PM:

>
> Congratulations and welcome!
>
>
> --
>
> Best regards,
> R.C
>
>
>
> 
> From: ramkrishna vasudevan 
> Sent: 14 May 2020 13:42
> To: dev
> Subject: Re: [ANNOUNCE] New HBase committer Wei-Chiu Chuang
>
> Congratulations Wei-Chiu !!!
>
> Regards
> Ram
>
> On Thu, May 14, 2020 at 10:55 AM Viraj Jasani  wrote:
>
> > Congratulations Wei-Chiu !!
> >
> > On 2020/05/13 19:12:38, Sean Busbey  wrote:
> > > Folks,
> > >
> > > On behalf of the Apache HBase PMC I am pleased to announce that
> Wei-Chiu
> > > Chuang has accepted the PMC's invitation to become a committer on the
> > > project.
> > >
> > > We appreciate all of the great contributions Wei-Chiu has made to the
> > > community thus far and we look forward to his continued involvement.
> > >
> > > Allow me to be the first to congratulate Wei-Chiu on his new role!
> > >
> > > thanks,
> > > busbey
> > >
> >
>


Re: [ANNOUNCE] New HBase committer Wei-Chiu Chuang

2020-05-14 Thread Reid Chan


Congratulations and welcome!


--

Best regards,
R.C




From: ramkrishna vasudevan 
Sent: 14 May 2020 13:42
To: dev
Subject: Re: [ANNOUNCE] New HBase committer Wei-Chiu Chuang

Congratulations Wei-Chiu !!!

Regards
Ram

On Thu, May 14, 2020 at 10:55 AM Viraj Jasani  wrote:

> Congratulations Wei-Chiu !!
>
> On 2020/05/13 19:12:38, Sean Busbey  wrote:
> > Folks,
> >
> > On behalf of the Apache HBase PMC I am pleased to announce that Wei-Chiu
> > Chuang has accepted the PMC's invitation to become a committer on the
> > project.
> >
> > We appreciate all of the great contributions Wei-Chiu has made to the
> > community thus far and we look forward to his continued involvement.
> >
> > Allow me to be the first to congratulate Wei-Chiu on his new role!
> >
> > thanks,
> > busbey
> >
>


[jira] [Resolved] (HBASE-24243) Unable to start HRegionserver and Master node considers as a dead region

2020-05-14 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil resolved HBASE-24243.
--
Resolution: Invalid

Please submit such enquiries to the users list (u...@hbase.apache.org). Jira 
should only be used for tracking work on the HBase project, not for general 
discussions.

> Unable to start HRegionserver and Master node considers as a dead region
> 
>
> Key: HBASE-24243
> URL: https://issues.apache.org/jira/browse/HBASE-24243
> Project: HBase
>  Issue Type: Brainstorming
>  Components: regionserver
>Reporter: Dinesh Nithyanandam
>Priority: Blocker
> Attachments: site.xml
>
>
> Hi Team,
> I am currently using Apache HBase version 1.3.6. I am trying to run the 
> Master and region server separately and then join the cluster dynamically, but 
> the region server was not starting and hangs at "*The RegionServer is 
> initializing*!"
> Commands used are as below (Master and region server are on separate nodes):
> Node A - Hbase Master - */opt/hbase/bin/hbase-daemon.sh --config 
> /usr/local/bin/hbase/conf start master*
> Node B - Hbase Region - */opt/hbase/bin/hbase-daemon.sh --config 
> /usr/local/bin/hbase/conf start regionserver*
> *{color:#ff}Please advise if the above commands are the right way to start 
> the hbase master and region server{color}*
> Environment - *Google Compute Engine (GCE) Instance groups/VM's*
> OS Type - *CentOS -7*
> Master running ports *- 16000.tcp 16010/web* 
> Region server running ports *- 16020/tcp* *16030/web*
> Also, I am not sure how to enable reverse DNS across both machines and 
> whether that is the problem; please advise on how to achieve it.
> *Master logs:*
> The master logs below show that the master tries to connect to the region 
> server and then eventually disconnects from the client region 
> server: 
>  * *{color:#ff}"{color}{color:#ff}*DEBUG 
> [RpcServer.reader=1,bindAddress=pinpoint-master-v000-rh5k.c.gcp-ushi-telemetry-npe.internal,port=16000]
>  ipc.RpcServer: RpcServer.listener,port=16000: DISCONNECTING client 
> 10.148.6.13:45732 because read count=-1. Number of active connections: 
> 1*{color}"*
> *complete logs*
> 2020-04-22 19:38:24,812 DEBUG [RpcServer.listener,port=16000] ipc.RpcServer: 
> RpcServer.listener,port=16000: connection from 10.148.6.13:45732; # active 
> connections: 1
>  2020-04-22 19:38:24,961 DEBUG 
> [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=16000] ipc.RpcServer: 
> RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=16000: callId: 0 service: 
> RegionServerStatusService methodName: RegionServerStartup size: 47 
> connection: 10.148.6.13:45732
>  2020-04-22 19:38:30,591 DEBUG 
> [*pinpoint-master-v000-rh5k:16000*.activeMasterManager] ipc.RpcClientImpl: 
> Connecting to 
> *pinpoint-r-v000-976s.c.gcp-ushi-telemetry-npe.internal/10.148.6.13:16020*
>  2020-04-22 19:38:31,268 *DEBUG [hconnection-0x5f02b9cb-shared--pool3-t1] 
> ipc.RpcClientImpl: Connecting to 
> pinpoint-r-v000-976s.c.gcp-ushi-telemetry-npe.internal/10.148.6.13:16020*
>  2020-04-22 19:38:31,478 DEBUG [ProcedureExecutor-3] ipc.RpcClientImpl: 
> Connecting to 
> pinpoint-r-v000-976s.c.gcp-ushi-telemetry-npe.internal/10.148.6.13:16020
>  2020-04-22 19:39:32,714 *DEBUG 
> [RpcServer.reader=1,bindAddress=pinpoint-master-v000-rh5k.c.gcp-ushi-telemetry-npe.internal,port=16000]
>  ipc.RpcServer: RpcServer.listener,port=16000: DISCONNECTING client 
> 10.148.6.13:45732 because read count=-1. Number of active connections: 1*
>  
> *Region server logs:*
> The logs below show that the region server discovers the master on its own 
> but is unable to join the cluster:
> ===
>  
> *{color:#ff}2020-04-22 19:38:24,675 INFO 
> [regionserver/pinpoint-r-v000-976s.c.gcp-ushi-telemetry-npe.internal/10.148.6.13:16020]
>  regionserver.HRegionServer: reportForDuty to 
> master=pinpoint-master-v000-rh5k.c.gcp-ushi-telemetry-npe.internal,16000{color}*,1587584303253
>  with port=16020, startcode=1587583634667
>  2020-04-22 19:38:24,801 DEBUG 
> [regionserver/pinpoint-r-v000-976s.c.gcp-ushi-telemetry-npe.internal/10.148.6.13:16020]
>  ipc.RpcClientImpl: Connecting to 
> pinpoint-master-v000-rh5k.c.gcp-ushi-telemetry-npe.internal/10.148.6.154:16000
>  2020-04-22 19:38:28,005 INFO 
> [regionserver/pinpoint-r-v000-976s.c.gcp-ushi-telemetry-npe.internal/10.148.6.13:16020]
>  regionserver.HRegionServer: reportForDuty to 
> master=pinpoint-master-v000-rh5k.c.gcp-ushi-telemetry-npe.internal,16000,1587584303253
>  with port=16020, startcode=1587583634667
>  2020-04-22 19:38:28,033 INFO 
> [regionserver/pinpoint-r-v000-976s.c.gcp-ushi-telemetry-npe.internal/10.148.6.13:16020]
>  regionserver.HRegionServer: Config from master: 
> hbas

Re: [ANNOUNCE] New HBase committer Wei-Chiu Chuang

2020-05-14 Thread Guanghao Zhang
Congratulations and welcome Wei-Chiu!

Wellington Chevreuil  wrote on Thu, May 14, 2020
at 6:01 PM:

> Congratulations, Wei-Chiu! Welcome!
>
> On Thu, May 14, 2020 at 10:12 AM, Jan Hentschel <
> jan.hentsc...@ultratendency.com> wrote:
>
> > Congratulations Wei-Chiu and welcome!
> >
> > From: Sean Busbey 
> > Reply-To: "dev@hbase.apache.org" 
> > Date: Wednesday, May 13, 2020 at 9:10 PM
> > To: dev , Hbase-User 
> > Subject: [ANNOUNCE] New HBase committer Wei-Chiu Chuang
> >
> > Folks,
> >
> > On behalf of the Apache HBase PMC I am pleased to announce that Wei-Chiu
> > Chuang has accepted the PMC's invitation to become a committer on the
> > project.
> >
> > We appreciate all of the great contributions Wei-Chiu has made to the
> > community thus far and we look forward to his continued involvement.
> >
> > Allow me to be the first to congratulate Wei-Chiu on his new role!
> >
> > thanks,
> > busbey
> >
> >
>


Re: [ANNOUNCE] New HBase committer Wei-Chiu Chuang

2020-05-14 Thread Wellington Chevreuil
Congratulations, Wei-Chiu! Welcome!

On Thu, May 14, 2020 at 10:12 AM, Jan Hentschel <
jan.hentsc...@ultratendency.com> wrote:

> Congratulations Wei-Chiu and welcome!
>
> From: Sean Busbey 
> Reply-To: "dev@hbase.apache.org" 
> Date: Wednesday, May 13, 2020 at 9:10 PM
> To: dev , Hbase-User 
> Subject: [ANNOUNCE] New HBase committer Wei-Chiu Chuang
>
> Folks,
>
> On behalf of the Apache HBase PMC I am pleased to announce that Wei-Chiu
> Chuang has accepted the PMC's invitation to become a committer on the
> project.
>
> We appreciate all of the great contributions Wei-Chiu has made to the
> community thus far and we look forward to his continued involvement.
>
> Allow me to be the first to congratulate Wei-Chiu on his new role!
>
> thanks,
> busbey
>
>


Re: [ANNOUNCE] New HBase committer Wei-Chiu Chuang

2020-05-14 Thread Jan Hentschel
Congratulations Wei-Chiu and welcome!

From: Sean Busbey 
Reply-To: "dev@hbase.apache.org" 
Date: Wednesday, May 13, 2020 at 9:10 PM
To: dev , Hbase-User 
Subject: [ANNOUNCE] New HBase committer Wei-Chiu Chuang

Folks,

On behalf of the Apache HBase PMC I am pleased to announce that Wei-Chiu
Chuang has accepted the PMC's invitation to become a committer on the
project.

We appreciate all of the great contributions Wei-Chiu has made to the
community thus far and we look forward to his continued involvement.

Allow me to be the first to congratulate Wei-Chiu on his new role!

thanks,
busbey



Re: [ANNOUNCE] New HBase committer Wei-Chiu Chuang

2020-05-14 Thread Peter Somogyi
Congratulations Wei-Chiu!

On Thu, May 14, 2020 at 7:59 AM 李响  wrote:

> Congratulations Wei-Chiu!
>
> On Thu, May 14, 2020 at 9:05 AM Hui Fei  wrote:
>
> > Congratulations Wei-Chiu!
> >
> > Sean Busbey  wrote on Thu, May 14, 2020 at 3:10 AM:
> >
> > > Folks,
> > >
> > > On behalf of the Apache HBase PMC I am pleased to announce that
> Wei-Chiu
> > > Chuang has accepted the PMC's invitation to become a committer on the
> > > project.
> > >
> > > We appreciate all of the great contributions Wei-Chiu has made to the
> > > community thus far and we look forward to his continued involvement.
> > >
> > > Allow me to be the first to congratulate Wei-Chiu on his new role!
> > >
> > > thanks,
> > > busbey
> > >
> >
>
>
> --
>
>李响 Xiang Li
>
> cellphone: +86-136-8113-8972
> e-mail: wate...@gmail.com
>


[jira] [Resolved] (HBASE-24190) Make kerberos value of hbase.security.authentication property case insensitive

2020-05-14 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HBASE-24190.
--
Resolution: Fixed

Fixed the Jira ID issue in the respective commits. Marking it resolved.

> Make kerberos value of hbase.security.authentication property case insensitive
> --
>
> Key: HBASE-24190
> URL: https://issues.apache.org/jira/browse/HBASE-24190
> Project: HBase
>  Issue Type: Bug
>Reporter: Yuanliang Zhang
>Assignee: Rushabh Shah
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 1.7.0, 2.1.10, 1.4.14, 2.2.5
>
>
> In hbase-20586 (https://issues.apache.org/jira/browse/HBASE-20586)
> (commit_sha: [https://github.com/apache/hbase/commit/cd61bcc0] )
> The code added 
> ([SyncTable.java|https://github.com/apache/hbase/commit/cd61bcc0#diff-d1b79635f33483bf6226609e91fd1cc3])
>  uses *hbase.security.authentication* in a case-sensitive way, so users 
> setting it to "KERBEROS" won't see it take effect. 
>  
> {code:java}
>  private void initCredentialsForHBase(String zookeeper, Job job) throws 
> IOException {
>    Configuration peerConf = 
> HBaseConfiguration.createClusterConf(job.getConfiguration(), zookeeper);
>    if(peerConf.get("hbase.security.authentication").equals("kerberos")){
>  TableMapReduceUtil.initCredentialsForCluster(job, peerConf);    }
>  }
> {code}
>  
> However, in current code base, other uses of *hbase.security.authentication* 
> are all case-insensitive. For example in *MasterFileSystem.java.* 
>  
> {code:java}
> public MasterFileSystem(Configuration conf) throws IOException{   
>   ...   
>   this.isSecurityEnabled = 
> "kerberos".equalsIgnoreCase(conf.get("hbase.security.authentication"));  
>   ... 
> }
> {code}
>  
> The doc in GitHub repo is also misleading (Giving upper-case value).
> {quote}As a distributed database, HBase must be able to authenticate users 
> and HBase services across an untrusted network. Clients and HBase services 
> are treated equivalently in terms of authentication (and this is the only 
> time we will draw such a distinction).
> There are currently three modes of authentication which are supported by 
> HBase today via the configuration property {{hbase.security.authentication}}
> {{1.SIMPLE}}
> {{2.KERBROS}}
> {{3.TOKEN}}
> {quote}
> Users may misconfigure the parameter because of this case-sensitivity problem.
> *How To Fix*
> Use the *equalsIgnoreCase* API consistently in every place that reads 
> *hbase.security.authentication*, or make the case sensitivity clear in the docs.
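A toy version of the two patterns in Python (the dict stands in for Configuration; normalizing case once, as the equalsIgnoreCase call sites already do, avoids the problem):

```python
conf = {"hbase.security.authentication": "KERBEROS"}  # upper-case, as the docs suggest

def is_kerberos_buggy(conf):
    # Mirrors .equals("kerberos"): rejects "KERBEROS".
    return conf.get("hbase.security.authentication", "simple") == "kerberos"

def is_kerberos_fixed(conf):
    # Mirrors equalsIgnoreCase("kerberos"), as MasterFileSystem already does.
    return conf.get("hbase.security.authentication", "simple").lower() == "kerberos"

print(is_kerberos_buggy(conf), is_kerberos_fixed(conf))  # False True
```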



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (HBASE-24164) Retain the ReadRequests and WriteRequests of region on web UI after alter table

2020-05-14 Thread Zheng Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Wang reopened HBASE-24164:


Reopened and created a new PR for v2.2.

> Retain the ReadRequests and WriteRequests of region on web UI after alter 
> table
> ---
>
> Key: HBASE-24164
> URL: https://issues.apache.org/jira/browse/HBASE-24164
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Zheng Wang
>Assignee: Zheng Wang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> When we alter a table, all of its regions do an RS self-move, which clears 
> their ReadRequests and WriteRequests. These are very useful metrics, so my 
> proposal is to keep them in RegionServerAccounting on close when the close is 
> an RS self-move, and restore them on open.
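A toy sketch of the retention idea described above (the class and method names are hypothetical, not the actual patch): stash the counters keyed by region name when the close is a server-local move, and hand them back on reopen.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RetainedRegionMetrics {
    // Hypothetical accounting structure, loosely modeled on the proposal.
    private final Map<String, long[]> retained = new ConcurrentHashMap<>();

    // On close: retain the counters only for a server-local (self) move.
    void onClose(String encodedRegionName, long readRequests, long writeRequests,
                 boolean selfMove) {
        if (selfMove) {
            retained.put(encodedRegionName, new long[] { readRequests, writeRequests });
        }
    }

    // On open: return {readRequests, writeRequests} to seed the reopened
    // region, or zeros if nothing was retained.
    long[] onOpen(String encodedRegionName) {
        long[] saved = retained.remove(encodedRegionName);
        return saved != null ? saved : new long[] { 0L, 0L };
    }
}
```

A region moved to another server (not a self-move) would intentionally start from zero, as before.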





[jira] [Created] (HBASE-24373) Implement JvmMetrics in HBase instead of using the one in hadoop

2020-05-14 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-24373:
-

 Summary: Implement JvmMetrics in HBase instead of using the one in 
hadoop
 Key: HBASE-24373
 URL: https://issues.apache.org/jira/browse/HBASE-24373
 Project: HBase
  Issue Type: Sub-task
  Components: logging, metrics
Reporter: Duo Zhang


The JvmMetrics class from Hadoop is hard-coded to use log4j.

Although we do not make use of that ability, it still prevents us from banning 
the log4j dependencies completely.

So let's implement JvmMetrics on our own, based on log4j2.
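A self-contained sketch of the direction (illustrative names only; the real implementation would plug into HBase's metrics system and log via log4j2): the JDK's java.lang.management beans already expose the raw JVM numbers with no logging dependency at all.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class JvmMetricsSketch {
    // Heap usage in MB, from the standard MemoryMXBean.
    static long heapUsedMB() {
        return ManagementFactory.getMemoryMXBean()
            .getHeapMemoryUsage().getUsed() / (1024 * 1024);
    }

    // Live thread count, from the standard ThreadMXBean.
    static int threadCount() {
        return ManagementFactory.getThreadMXBean().getThreadCount();
    }

    // Total GC count across all collectors; getCollectionCount() may be -1
    // when a collector does not report, so skip those.
    static long totalGcCount() {
        long count = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long c = gc.getCollectionCount();
            if (c > 0) {
                count += c;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println("heapUsedMB=" + heapUsedMB());
        System.out.println("threads=" + threadCount());
        System.out.println("gcCount=" + totalGcCount());
    }
}
```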





[jira] [Created] (HBASE-24372) HBASE-20658 Release latch later for CreateTable procedure.

2020-05-14 Thread Lijin Bin (Jira)
Lijin Bin created HBASE-24372:
-

 Summary: HBASE-20658 Release latch later for CreateTable procedure.
 Key: HBASE-24372
 URL: https://issues.apache.org/jira/browse/HBASE-24372
 Project: HBase
  Issue Type: Bug
Reporter: Lijin Bin


{code}
2020-03-25 14:58:58,375 INFO  
[RpcServer.default.FPBQ.Fifo.handler=397,queue=77,port=6] master.HMaster: 
Client=hbaseadmin//10.196.142.227 create 'extra_50039', {NAME => 'extra', 
VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 
'false', KEEP_DELETED_CELLS => 'false', CACHE_DATA_ON_WRITE => 'false', 
DATA_BLOCK_ENCODING => 'DIFF', TTL => '2592000 SECONDS (30 DAYS)', MIN_VERSIONS 
=> '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 
'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', 
PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'SNAPPY', BLOCKCACHE => 
'true', BLOCKSIZE => '65536'}
2020-03-25 14:58:58,482 INFO  
[RpcServer.default.FPBQ.Fifo.handler=397,queue=77,port=6] 
rsgroup.RSGroupAdminServer: Moving table extra_50039 to RSGroup default
2020-03-25 14:58:58,485 ERROR 
[RpcServer.default.FPBQ.Fifo.handler=397,queue=77,port=6] 
master.TableStateManager: Unable to get table extra_50039 state
org.apache.hadoop.hbase.master.TableStateManager$TableStateNotFoundException: 
extra_50039
at 
org.apache.hadoop.hbase.master.TableStateManager.getTableState(TableStateManager.java:215)
at 
org.apache.hadoop.hbase.master.TableStateManager.isTableState(TableStateManager.java:147)
at 
org.apache.hadoop.hbase.master.assignment.AssignmentManager.isTableDisabled(AssignmentManager.java:388)
at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveTableRegionsToGroup(RSGroupAdminServer.java:233)
at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveTables(RSGroupAdminServer.java:347)
at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.assignTableToGroup(RSGroupAdminEndpoint.java:456)
at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.postCreateTable(RSGroupAdminEndpoint.java:479)
at 
org.apache.hadoop.hbase.master.MasterCoprocessorHost$13.call(MasterCoprocessorHost.java:350)
at 
org.apache.hadoop.hbase.master.MasterCoprocessorHost$13.call(MasterCoprocessorHost.java:347)
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:551)
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:625)
at 
org.apache.hadoop.hbase.master.MasterCoprocessorHost.postCreateTable(MasterCoprocessorHost.java:347)
at org.apache.hadoop.hbase.master.HMaster$4.run(HMaster.java:2048)
at 
org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:134)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2031)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:658)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
2020-03-25 14:58:58,486 INFO  
[RpcServer.default.FPBQ.Fifo.handler=397,queue=77,port=6] 
rsgroup.RSGroupAdminServer: Moving region(s) for table extra_50039 to 
RSGroup default
2020-03-25 14:58:58,486 INFO  
[RpcServer.default.FPBQ.Fifo.handler=397,queue=77,port=6] 
master.MasterRpcServices: Client=hbaseadmin//10.196.142.227 procedure request 
for creating table: namespace: "default"
qualifier: "extra_50039"
 procId is: 14019
{code}

The latch is released when prepareCreate executes, so 
MasterCoprocessorHost#postCreateTable can run before the table state is 
persisted and may fail with the TableStateNotFoundException shown in the log 
above.
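A toy illustration of the ordering fix being proposed (plain JDK code, not HBase internals): if the latch is counted down only after the shared state is written, a hook that awaits the latch can never observe the missing state. CountDownLatch guarantees that writes before countDown() are visible after await() returns.

```java
import java.util.concurrent.CountDownLatch;

public class LatchOrdering {
    // Release the latch only after the state is set, so the waiting
    // post-hook (analogous to postCreateTable) never sees null.
    static String runOnce() {
        final String[] tableState = new String[1];
        final String[] observed = new String[1];
        CountDownLatch prepared = new CountDownLatch(1);
        Thread postHook = new Thread(() -> {
            try {
                prepared.await();          // hook waits for the latch
                observed[0] = tableState[0];
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        try {
            postHook.start();
            tableState[0] = "ENABLED";     // persist state first...
            prepared.countDown();          // ...then release the latch
            postHook.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return observed[0];
    }

    public static void main(String[] args) {
        System.out.println(runOnce()); // prints "ENABLED"
    }
}
```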





[jira] [Resolved] (HBASE-24201) Fix CI builds on branch-2.2

2020-05-14 Thread Guanghao Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang resolved HBASE-24201.

Resolution: Invalid

> Fix CI builds on branch-2.2
> ---
>
> Key: HBASE-24201
> URL: https://issues.apache.org/jira/browse/HBASE-24201
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.2.5
>Reporter: Nick Dimiduk
>Assignee: Guanghao Zhang
>Priority: Major
>
> From a recent [PR 
> build|https://builds.apache.org/blue/organizations/jenkins/HBase-PreCommit-GitHub-PR/detail/PR-1532/1/pipeline/]
> {noformat}
> [2020-04-16T18:43:21.548Z] Setting up ruby2.3 (2.3.3-1+deb9u7) ...
> [2020-04-16T18:43:21.548Z] Setting up ruby2.3-dev:amd64 (2.3.3-1+deb9u7) ...
> [2020-04-16T18:43:21.548Z] Setting up ruby-dev:amd64 (1:2.3.3) ...
> [2020-04-16T18:43:21.548Z] Setting up ruby (1:2.3.3) ...
> [2020-04-16T18:43:22.261Z] Processing triggers for libc-bin (2.24-11+deb9u3) 
> ...
> [2020-04-16T18:43:22.975Z] Successfully installed rake-13.0.1
> [2020-04-16T18:43:22.975Z] Building native extensions.  This could take a 
> while...
> [2020-04-16T18:43:25.277Z] ERROR:  Error installing rubocop:
> [2020-04-16T18:43:25.277Z]rubocop requires Ruby version >= 2.4.0.
> {noformat}
> Looks like the Dockerfile on branch-2.2 has bit-rotted. I suspect package 
> versions are partially pinned or not pinned at all: the rubocop version has 
> incremented but the ruby version has not.





[jira] [Created] (HBASE-24371) Add more details when print CompactionConfiguration info.

2020-05-14 Thread Lijin Bin (Jira)
Lijin Bin created HBASE-24371:
-

 Summary: Add more details when print CompactionConfiguration info.
 Key: HBASE-24371
 URL: https://issues.apache.org/jira/browse/HBASE-24371
 Project: HBase
  Issue Type: Improvement
Reporter: Lijin Bin
Assignee: Lijin Bin





