Re: Split Meta Design Reset Status

2021-07-22 Thread Stack
Notes from yesterday's meeting (attendees, please amend if I misrepresent
or if you have anything extra to add!)

Split Meta Design Reset Status

Wed Jul 21 21:24:38 PDT 2021

Attendees: Bharath, Stack, Duo, and Francis

We went over the new updates to the Brainstorming [1] section under
Design in the Super Split Meta Design doc [2].

First was the new addition, 4.1.2 Extend (& Move) ConnectionRegistry; hide
ROOT from Client [3]. In particular, filling out how "ROOT" might be
implemented behind the new API in ConnectionRegistry. For option 1,
replicating the master-local Region to RegionServers, options considered
included:

 * A listener on the master-local Region WAL to replicate (see the rough
   sketch below).
 * Perhaps Read-Replica, but master-local is not an actual Region.
 * Needs to be incremental edits because ROOT could get too big to ship
   in a lump; need to visit how...
 * Possibly in-memory-only Regions on RSes replicated from the master-local
   Region via WAL tailing <= zhang...@apache.org
 * Which RegionServers? Those hosting ROOT replicas?
 * How to bootstrap? Failure scenarios.
 * This would be a new replication system alongside the current one; could
   evolve to replace/improve the old?

Duo offered to look into means of replicating the master-local Region
out to RegionServers.
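
For concreteness, a purely hypothetical sketch of the WAL-tailing idea listed
above; none of the class or method names below are existing HBase API, and the
transport is exactly the open question Duo took away:

{code:java}
import java.util.List;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.wal.WAL;

// Hypothetical sketch: a listener on the master-local Region's WAL that ships each
// new edit to the RegionServers keeping in-memory ROOT replicas. Shipping single
// entries keeps the replication incremental, so ROOT never has to move in one lump.
public class MasterLocalRegionWalShipper {
  private final List<ServerName> rootReplicaHosts; // which RSes? (open question above)

  public MasterLocalRegionWalShipper(List<ServerName> rootReplicaHosts) {
    this.rootReplicaHosts = rootReplicaHosts;
  }

  /** Invoked for every entry appended to the master-local Region WAL. */
  public void onAppend(WAL.Entry entry) {
    for (ServerName rs : rootReplicaHosts) {
      ship(rs, entry);
    }
  }

  private void ship(ServerName rs, WAL.Entry entry) {
    // Transport (e.g. an RPC applying the edit to an in-memory-only Region on the RS)
    // is deliberately left open; bootstrap and failure handling are TODOs above.
  }
}
{code}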

Next up was discussion contrasting ROOT as a standalone table vs
First-meta Region-as-Root (see 4.1.1 hbase:meta,,1 as ROOT [4]); i.e.
options 2 and 3 for how we'd implement a ROOT. One item that came up
was whether there is a need to specify one replica count for a ROOT table
vs another for hbase:meta. If so, that would be an argument for ROOT as a
standalone table (others of us argued it is not a concern of consequence).

If ROOT access is behind a new simple API in ConnectionRegistry, how do
we stop clients reading the hbase:meta table if they are not the Master or
fronted by a ConnectionRegistry request? (Should be able to switch on
client identity/source). One suggestion for First-meta-Region-as-ROOT was
NOT returning the first Region to the client post-meta-split when
accessing via the simple API. Some concern this would confuse old
Clients (Francis was going to take a look).

We moved to discussing how we'd move ConnectionRegistry from
hosted-by-Master to hosted-by-RegionServers. How to bootstrap such a
system came up: where do clients go? How do they know which
RegionServers (a special regionserver group? Every RS fields
ConnectionRegistry requests but only a designated core serves the ROOT
lookup APIs?). This was a TODO.

This led naturally into 4.1.5 System RS group for client meta services
[5], a new addition under Brainstorming. Discussion. Bharath to look
into feedback.

At the end of the discussion, the group expressed support for adding a
simple API to the ConnectionRegistry to hide the ROOT implementation
detail from the client (a hypothetical sketch of what such an API might
look like follows below). Support was also expressed for moving
ConnectionRegistry from Master to RegionServers. Intent is to move forward
on design of these pieces: e.g. how the client bootstraps.
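
As a concrete illustration of the shape such an API might take; the names and
signatures here are hypothetical, not anything settled in the doc:

{code:java}
import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.hbase.HRegionLocation;

// Hypothetical sketch only. The idea: clients ask the registry where hbase:meta
// Regions live; whether the answer comes from a master-local "ROOT" Region, the
// first meta Region acting as ROOT, or a standalone ROOT table stays hidden
// behind this call, so clients never read ROOT directly.
public interface ConnectionRegistry {
  // ...existing lookups (cluster id, active master, etc.) elided...

  /** Locate the hbase:meta Region covering the given row. */
  CompletableFuture<HRegionLocation> locateMeta(byte[] row);
}
{code}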

Support was expressed for getting at least the bones of a split meta
into an hbase3 release before the RCs.

Where we'd actually store hbase:meta Region locations -- i.e. how a
"ROOT" would be implemented -- was left for our next meeting, informed by
research of the various approaches noted above. It was also thought that
the new ConnectionRegistry should not preclude making progress on the
"ROOT" implementation.

Will post notice of the next meeting (next Weds or the one following).

1. https://docs.google.com/document/d/11ChsSb2LGrSzrSJz8pDCAw5IewmaMV0ZDN1LrMkAj4s/edit#heading=h.wr0fwvw06j7n

2. https://docs.google.com/document/d/11ChsSb2LGrSzrSJz8pDCAw5IewmaMV0ZDN1LrMkAj4s/edit#heading=h.9s666p6no9cq

3. https://docs.google.com/document/d/11ChsSb2LGrSzrSJz8pDCAw5IewmaMV0ZDN1LrMkAj4s/edit#heading=h.90th11txi153

4. https://docs.google.com/document/d/11ChsSb2LGrSzrSJz8pDCAw5IewmaMV0ZDN1LrMkAj4s/edit#heading=h.ikbhxlcthjle

5. https://docs.google.com/document/d/11ChsSb2LGrSzrSJz8pDCAw5IewmaMV0ZDN1LrMkAj4s/edit#heading=h.utoenf10t05b

On Tue, Jul 20, 2021 at 11:00 AM Stack  wrote:

> Let's meet tomorrow. Please review the design doc "Design/Brainstorming"
> Section 4.1 [1] before the meeting if you can (no harm in a refresh of the
> requirements section while you are at it).
>
> Topic: Split Meta Design Reset Status
> Time: Jul 21, 2021 05:00 PM Pacific Time (US and Canada)
>
> Join Zoom Meeting
> https://us04web.zoom.us/j/77318920525?pwd=OFZXZFVPSHJLaGNsby9SN25OV1F2Zz09
>
> Meeting ID: 773 1892 0525
> Passcode: hbase
>
> Thanks,
> S
>
> 1.
> https://docs.google.com/document/d/11ChsSb2LGrSzrSJz8pDCAw5IewmaMV0ZDN1LrMkAj4s/edit#heading=h.wr0fwvw06j7n
>
> On Thu, Jul 8, 2021 at 1:04 PM Stack  wrote:
>
>> Meeting notes (Meeting happened after I figured out the zoom had a 'waiting
>> room' -- sorry Duo)
>>
>> Split Meta Status Zoom Meeting
>> Wed Jul  7, 2021 @ 5pm for ~90 minutes
>> Attendees: Duo, Francis, Stack, and Clay
>> Agenda: Mainly talk about the one-pager design and PoC proposed in [2]
>> 

[jira] [Resolved] (HBASE-26094) In branch-1 L2 BC should not be the victimhandler of L1 BC when using combined BC

2021-07-22 Thread Reid Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan resolved HBASE-26094.
---
Fix Version/s: 1.7.2
 Hadoop Flags: Reviewed
   Resolution: Fixed

> In branch-1 L2 BC should not be the victimhandler of L1 BC when using 
> combined BC
> -
>
> Key: HBASE-26094
> URL: https://issues.apache.org/jira/browse/HBASE-26094
> Project: HBase
>  Issue Type: Bug
>  Components: BlockCache
>Affects Versions: 1.7.0
>Reporter: Yutong Xiao
>Assignee: Yutong Xiao
>Priority: Major
> Fix For: 1.7.2
>
>
> Currently in branch-1, the block cache initialisation is:
> {code:java}
>     LruBlockCache l1 = getL1(conf);
>     // blockCacheDisabled is set as a side-effect of getL1Internal(), so check it again after the call.
>     if (blockCacheDisabled) return null;
>     BlockCache l2 = getL2(conf);
>     if (l2 == null) {
>       GLOBAL_BLOCK_CACHE_INSTANCE = l1;
>     } else {
>       boolean useExternal = conf.getBoolean(EXTERNAL_BLOCKCACHE_KEY, EXTERNAL_BLOCKCACHE_DEFAULT);
>       boolean combinedWithLru = conf.getBoolean(BUCKET_CACHE_COMBINED_KEY,
>         DEFAULT_BUCKET_CACHE_COMBINED);
>       if (useExternal) {
>         GLOBAL_BLOCK_CACHE_INSTANCE = new InclusiveCombinedBlockCache(l1, l2);
>       } else {
>         if (combinedWithLru) {
>           GLOBAL_BLOCK_CACHE_INSTANCE = new CombinedBlockCache(l1, l2);
>         } else {
>           // L1 and L2 are not 'combined'.  They are connected via the LruBlockCache victimhandler
>           // mechanism.  It is a little ugly but works according to the following: when the
>           // background eviction thread runs, blocks evicted from L1 will go to L2 AND when we get
>           // a block from the L1 cache, if not in L1, we will search L2.
>           GLOBAL_BLOCK_CACHE_INSTANCE = l1;
>         }
>       }
>       l1.setVictimCache(l2);
>     }
> {code}
> As the code above shows, L2 will always be set as the victimhandler of L1, no matter
> whether we use the combined blockcache or not. But per the logic in master and branch-2,
> L2 should not be the victimhandler of L1 when using the combined BC. We should set the
> victimhandler only when we use InclusiveCombinedBC or when we do not use CombinedBC.
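
A minimal sketch of the adjustment described above, reusing the variable names from
the quoted snippet; this illustrates the stated intent and is not the committed patch:

{code:java}
// Hypothetical sketch: keep the victimhandler chaining for the external (inclusive)
// and non-combined modes, but skip it for CombinedBlockCache, matching master/branch-2.
if (useExternal) {
  GLOBAL_BLOCK_CACHE_INSTANCE = new InclusiveCombinedBlockCache(l1, l2);
  l1.setVictimCache(l2); // inclusive mode still evicts L1 blocks into L2
} else if (combinedWithLru) {
  // Combined mode: L2 must NOT be the victimhandler of L1.
  GLOBAL_BLOCK_CACHE_INSTANCE = new CombinedBlockCache(l1, l2);
} else {
  // Not combined: L1 and L2 stay connected via the victimhandler mechanism.
  GLOBAL_BLOCK_CACHE_INSTANCE = l1;
  l1.setVictimCache(l2);
}
{code}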



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26115) ServerTestHBaseCluster interface for testing coprocs

2021-07-22 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created HBASE-26115:
---

 Summary: ServerTestHBaseCluster interface for testing coprocs
 Key: HBASE-26115
 URL: https://issues.apache.org/jira/browse/HBASE-26115
 Project: HBase
  Issue Type: Test
Affects Versions: 3.0.0-alpha-1
Reporter: Geoffrey Jacoby
Assignee: Geoffrey Jacoby


The new TestingHBaseCluster introduced in HBASE-26080 provides a clean way for 
downstream developers writing features using the HBase client APIs to test 
their code. Its inner minicluster class, HBaseTestingUtil, was left unexposed 
with an interface audience of Phoenix, because coprocessors might need access 
to the internals of HBase itself in order to be tested.

Occasionally, a developer outside of HBase and Phoenix might need the same 
access to the internals. One way to do this would be to introduce a new 
interface, ServerTestHBaseCluster, that extends TestingHBaseCluster and exposes 
the HBaseTestingUtil, with an interface audience of COPROC, REPLICATION (for 
custom endpoints), and PHOENIX.
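
A hypothetical sketch of what the proposed interface might look like; the accessor
name and exact annotation are assumptions, not settled API:

{code:java}
import org.apache.hadoop.hbase.HBaseInterfaceAudience;
import org.apache.hadoop.hbase.HBaseTestingUtil;
import org.apache.hadoop.hbase.testing.TestingHBaseCluster;
import org.apache.yetus.audience.InterfaceAudience;

// Hypothetical sketch only: extends TestingHBaseCluster and exposes the inner
// minicluster utility so coprocessor/replication/Phoenix tests can reach HBase internals.
@InterfaceAudience.LimitedPrivate({ HBaseInterfaceAudience.COPROC,
    HBaseInterfaceAudience.REPLICATION, HBaseInterfaceAudience.PHOENIX })
public interface ServerTestHBaseCluster extends TestingHBaseCluster {

  /** The backing minicluster utility, for tests that need access to HBase internals. */
  HBaseTestingUtil getTestingUtil();
}
{code}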



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-26108) add option to disable scanMetrics in TableSnapshotInputFormat

2021-07-22 Thread Huaxiang Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huaxiang Sun resolved HBASE-26108.
--
Fix Version/s: 2.4.5
   3.0.0-alpha-2
   2.3.6
   Resolution: Fixed

> add option to disable scanMetrics in TableSnapshotInputFormat
> -
>
> Key: HBASE-26108
> URL: https://issues.apache.org/jira/browse/HBASE-26108
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.3.5
>Reporter: Huaxiang Sun
>Assignee: Huaxiang Sun
>Priority: Major
> Fix For: 2.3.6, 3.0.0-alpha-2, 2.4.5
>
>
> When running a Spark job with TableSnapshotInputFormat, we found that the scan is 
> very slow. scanMetrics is hardcoded as enabled, and Spark's newAPIHadoopRDD uses 
> Hadoop's DummyReporter, which causes the following exception; 80% of CPU time is 
> spent on this exception handling. 
> Need to provide an option to disable scanMetrics.
> java.base@11.0.5/java.lang.Throwable.fillInStackTrace(Native Method)
> java.base@11.0.5/java.lang.Throwable.fillInStackTrace(Throwable.java:787) => holding Monitor(java.util.MissingResourceException@258206255})
> java.base@11.0.5/java.lang.Throwable.<init>(Throwable.java:292)
> java.base@11.0.5/java.lang.Exception.<init>(Exception.java:84)
> java.base@11.0.5/java.lang.RuntimeException.<init>(RuntimeException.java:80)
> java.base@11.0.5/java.util.MissingResourceException.<init>(MissingResourceException.java:85)
> java.base@11.0.5/java.util.ResourceBundle.throwMissingResourceException(ResourceBundle.java:2055)
> java.base@11.0.5/java.util.ResourceBundle.getBundleImpl(ResourceBundle.java:1689)
> java.base@11.0.5/java.util.ResourceBundle.getBundleImpl(ResourceBundle.java:1593)
> java.base@11.0.5/java.util.ResourceBundle.getBundle(ResourceBundle.java:1284)
> app//org.apache.hadoop.mapreduce.util.ResourceBundles.getBundle(ResourceBundles.java:37)
> app//org.apache.hadoop.mapreduce.util.ResourceBundles.getValue(ResourceBundles.java:56) => holding Monitor(java.lang.Class@545605549})
> app//org.apache.hadoop.mapreduce.util.ResourceBundles.getCounterGroupName(ResourceBundles.java:77)
> app//org.apache.hadoop.mapreduce.counters.CounterGroupFactory.newGroup(CounterGroupFactory.java:94)
> app//org.apache.hadoop.mapreduce.counters.AbstractCounters.getGroup(AbstractCounters.java:227)
> app//org.apache.hadoop.mapreduce.counters.AbstractCounters.findCounter(AbstractCounters.java:154)
> app//org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl$DummyReporter.getCounter(TaskAttemptContextImpl.java:110)
> app//org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.getCounter(TaskAttemptContextImpl.java:76)
> org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.updateCounters(TableRecordReaderImpl.java:311)
> org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat$TableSnapshotRegionRecordReader.nextKeyValue(TableSnapshotInputFormat.java:167)
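
A rough usage sketch of the switch being requested. The property name below is an
assumed placeholder for illustration, not a confirmed HBase configuration key; the
intent is simply a boolean flag that the snapshot record reader consults before
updating scan-metric counters:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Hypothetical sketch: disable scan metrics for a snapshot-backed Spark/MR scan.
Configuration conf = HBaseConfiguration.create();
conf.setBoolean("hbase.TableSnapshotInputFormat.scan_metrics.enabled", false); // assumed key name
// The job then uses conf with TableSnapshotInputFormat / newAPIHadoopRDD as before;
// with the flag off, TableRecordReaderImpl.updateCounters() would skip the counter
// lookups that trigger the MissingResourceException handling shown above.
{code}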



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Asking about HBase

2021-07-22 Thread Febiola Simangunsong
Hello sir, I want to ask about "Hmaster dead after a few second when i run
hbase".
Can you help me solve that problem?
Thank you, sir


[jira] [Created] (HBASE-26114) when “hbase.mob.compaction.threads.max” is set to a negative number, HMaster cannot start normally

2021-07-22 Thread Jingxuan Fu (Jira)
Jingxuan Fu created HBASE-26114:
---

 Summary: when “hbase.mob.compaction.threads.max” is set to a 
negative number, HMaster cannot start normally 
 Key: HBASE-26114
 URL: https://issues.apache.org/jira/browse/HBASE-26114
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 2.4.4, 2.2.0
 Environment: HBase 2.2.2
os.name=Linux
os.arch=amd64
os.version=5.4.0-72-generic
java.version=1.8.0_191
java.vendor=Oracle Corporation
Reporter: Jingxuan Fu
 Fix For: 3.0.0-alpha-1


In hbase-default.xml:

{code:xml}
<property>
  <name>hbase.mob.compaction.threads.max</name>
  <value>1</value>
  <description>
    The max number of threads used in MobCompactor.
  </description>
</property>
{code}
 

When the value is set to a negative number, such as -1, HMaster cannot start 
normally.

The log file will output:

{noformat}
2021-07-22 18:54:13,758 ERROR [master/JavaFuzz:16000:becomeActiveMaster] master.HMaster: Failed to become active master
java.lang.IllegalArgumentException
  at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1314)
  at org.apache.hadoop.hbase.mob.MobUtils.createMobCompactorThreadPool(MobUtils.java:880)
  at org.apache.hadoop.hbase.master.MobCompactionChore.<init>(MobCompactionChore.java:51)
  at org.apache.hadoop.hbase.master.HMaster.initMobCleaner(HMaster.java:1278)
  at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1161)
  at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2112)
  at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:580)
  at java.lang.Thread.run(Thread.java:748)

2021-07-22 18:54:13,760 ERROR [master/JavaFuzz:16000:becomeActiveMaster] master.HMaster: Master server abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
2021-07-22 18:54:13,760 ERROR [master/JavaFuzz:16000:becomeActiveMaster] master.HMaster: * ABORTING master javafuzz,16000,1626951243154: Unhandled exception. Starting shutdown. *
java.lang.IllegalArgumentException
  at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1314)
  at org.apache.hadoop.hbase.mob.MobUtils.createMobCompactorThreadPool(MobUtils.java:880)
  at org.apache.hadoop.hbase.master.MobCompactionChore.<init>(MobCompactionChore.java:51)
  at org.apache.hadoop.hbase.master.HMaster.initMobCleaner(HMaster.java:1278)
  at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1161)
  at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2112)
  at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:580)
  at java.lang.Thread.run(Thread.java:748)

2021-07-22 18:54:13,760 INFO  [master/JavaFuzz:16000:becomeActiveMaster] regionserver.HRegionServer: * STOPPING region server 'javafuzz,16000,1626951243154' *
{noformat}
 

In MobUtils.java (package org.apache.hadoop.hbase.mob), this method is the same
from version 2.2.0 to version 2.4.4:
{code:java}
  public static ExecutorService createMobCompactorThreadPool(Configuration conf) {
    int maxThreads = conf.getInt(MobConstants.MOB_COMPACTION_THREADS_MAX,
      MobConstants.DEFAULT_MOB_COMPACTION_THREADS_MAX);
    if (maxThreads == 0) {
      maxThreads = 1;
    }
    final SynchronousQueue<Runnable> queue = new SynchronousQueue<>();
    ThreadPoolExecutor pool = new ThreadPoolExecutor(1, maxThreads, 60, TimeUnit.SECONDS, queue,
      Threads.newDaemonThreadFactory("MobCompactor"), new RejectedExecutionHandler() {
        @Override
        public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
          try {
            // waiting for a thread to pick up instead of throwing exceptions.
            queue.put(r);
          } catch (InterruptedException e) {
            throw new RejectedExecutionException(e);
          }
        }
      });
    ((ThreadPoolExecutor) pool).allowCoreThreadTimeOut(true);
    return pool;
  }
{code}
When MOB_COMPACTION_THREADS_MAX is set to 0, MobUtils will set it to 1. But the 
code does not take into account that it might be set to a negative number. When it 
is set to a negative number, the initialization of the ThreadPoolExecutor will 
fail and an IllegalArgumentException will be thrown, making HMaster fail to 
start.

Sometimes users will use -1 as the value of the default item in the 
configuration file.

Therefore, it is best to modify the source code so that when the value is 
negative, _maxThreads_ is also set to 1.

Only the following needs to change:

    if (maxThreads == 0) {

to

    if (maxThreads <= 0) {
  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-26113) Hmaster dead after a few second when i run hbase

2021-07-22 Thread Pankaj Kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar resolved HBASE-26113.
--
Resolution: Not A Problem

JIRA is for bug tracking. For discussion or queries, please feel free to send 
mail to u...@hbase.apache.org or dev@hbase.apache.org.

Closing it for now; please reopen if required.

> Hmaster dead after a few second when i run hbase
> 
>
> Key: HBASE-26113
> URL: https://issues.apache.org/jira/browse/HBASE-26113
> Project: HBase
>  Issue Type: Test
>  Components: build, Client, master
>Affects Versions: 3.0.0-alpha-1
>Reporter: Yose Simamora
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
> Attachments: Screenshot from 2021-07-22 16-39-47.png
>
>
> first hmaster runs but after a few seconds hmaster shuts down 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-24734) RegionInfo#containsRange should support check meta table

2021-07-22 Thread Yi Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Mei resolved HBASE-24734.

Fix Version/s: 2.4.5
   3.0.0-alpha-2
   2.3.6
   2.5.0
   Resolution: Fixed

> RegionInfo#containsRange should support check meta table
> 
>
> Key: HBASE-24734
> URL: https://issues.apache.org/jira/browse/HBASE-24734
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, MTTR
>Reporter: Michael Stack
>Priority: Major
> Fix For: 2.5.0, 2.3.6, 3.0.0-alpha-2, 2.4.5
>
>
> Came across this when we were testing the 'split-to-hfile' feature running 
> ITBLL:
>  
> {code:java}
> 2020-07-10 10:16:49,983 INFO org.apache.hadoop.hbase.regionserver.HRegion: Closing region hbase:meta,,1.1588230740
> 2020-07-10 10:16:49,997 INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed hbase:meta,,1.1588230740
> 2020-07-10 10:16:49,998 WARN org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler: Fatal error occurred while opening region hbase:meta,,1.1588230740, aborting...
> java.lang.IllegalArgumentException: Invalid range: IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387.
>         at org.apache.hadoop.hbase.client.RegionInfoBuilder$MutableRegionInfo.containsRange(RegionInfoBuilder.java:300)
>         at org.apache.hadoop.hbase.regionserver.HStore.tryCommitRecoveredHFile(HStore.java:)
>         at org.apache.hadoop.hbase.regionserver.HRegion.loadRecoveredHFilesIfAny(HRegion.java:5442)
>         at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:1010)
>         at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:950)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7490)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7448)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7424)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7382)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7333)
>         at org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:135)
>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
>         at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>         at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>         at java.base/java.lang.Thread.run(Thread.java:834)
> 2020-07-10 10:16:50,005 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: * ABORTING region server hbasedn149.example.org,16020,1594375563853: Failed to open region hbase:meta,,1.1588230740 and can not recover *
> java.lang.IllegalArgumentException: Invalid range: IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387.
>  {code}
> Seems a basic case of the wrong comparator. The below passes if I use the meta 
> comparator
> {code:java}
>   @Test
>   public void testBinaryKeys() throws Exception {
>     Set<Cell> set = new TreeSet<>(CellComparatorImpl.COMPARATOR);
>     final byte [] fam = Bytes.toBytes("col");
>     final byte [] qf = Bytes.toBytes("umn");
>     final byte [] nb = new byte[0];
>     Cell [] keys = {
>         createByteBufferKeyValueFromKeyValue(
>             new KeyValue(Bytes.toBytes("a,\u0000\u0000,2"), fam, qf, 2, nb)),
>         createByteBufferKeyValueFromKeyValue(
>             new KeyValue(Bytes.toBytes("a,\u0001,3"), fam, qf, 3, nb)),
>         createByteBufferKeyValueFromKeyValue(
>             new KeyValue(Bytes.toBytes("a,,1"), fam, qf, 1, nb)),
>         createByteBufferKeyValueFromKeyValue(
>             new KeyValue(Bytes.toBytes("a,\u1000,5"), fam, qf, 5, nb)),
>         createByteBufferKeyValueFromKeyValue(
>             new KeyValue(Bytes.toBytes("a,a,4"), fam, qf, 4, nb)),
>         createByteBufferKeyValueFromKeyValue(
>             new KeyValue(Bytes.toBytes("a,a,0"), fam, qf, 0, nb)),
>     };
>     // Add to set with bad comparator
>     Collections.addAll(set, keys);
>     // This will output the keys incorrectly.
>     boolean assertion = false;
>     int count = 0;
>     try {
>       for (Cell k: set) {
>         assertTrue("count=" + count + ", " + k.toString(), count++ == k.getTimestamp());
>       }
>     } catch (AssertionError e) {
>       // Expected
>       assertion = true;
>     }
>     assertTrue(assertion);
>     // 

[jira] [Created] (HBASE-26113) Hmaster dead after a few second when i run hbase

2021-07-22 Thread Yose Simamora (Jira)
Yose Simamora created HBASE-26113:
-

 Summary: Hmaster dead after a few second when i run hbase
 Key: HBASE-26113
 URL: https://issues.apache.org/jira/browse/HBASE-26113
 Project: HBase
  Issue Type: Test
  Components: build, Client, master
Affects Versions: 3.0.0-alpha-1
Reporter: Yose Simamora
 Fix For: 3.0.0-alpha-1
 Attachments: Screenshot from 2021-07-22 16-39-47.png

first hmaster runs but after a few seconds hmaster shuts down 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26112) Error when running hbase with mvn package -DskipTests

2021-07-22 Thread Yose Simamora (Jira)
Yose Simamora created HBASE-26112:
-

 Summary: Error when running hbase with mvn package -DskipTests
 Key: HBASE-26112
 URL: https://issues.apache.org/jira/browse/HBASE-26112
 Project: HBase
  Issue Type: Test
  Components: build
Affects Versions: 2.4.1
Reporter: Yose Simamora
 Fix For: 2.4.1
 Attachments: Screenshot from 2021-07-22 16-20-02.png

I built the hbase source code with mvn package -DskipTests but there are 
errors when I run it  !Screenshot from 2021-07-22 16-20-02.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-26107) MOB compaction with missing files catches incorrect exception

2021-07-22 Thread Peter Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi resolved HBASE-26107.
---
Fix Version/s: 3.0.0-alpha-2
   Resolution: Fixed

Thanks for the reviews [~zhangduo] and [~pankajkumar]! Pushed to master.

> MOB compaction with missing files catches incorrect exception
> -
>
> Key: HBASE-26107
> URL: https://issues.apache.org/jira/browse/HBASE-26107
> Project: HBase
>  Issue Type: Bug
>  Components: mob
>Affects Versions: 3.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Major
> Fix For: 3.0.0-alpha-2
>
>
> The MOB compaction catches FileNotFoundException when 
> {{hbase.unsafe.mob.discard.miss}} is true to handle missing MOB cells. The 
> FNFE is wrapped in DoNotRetryIOException so the compaction fails for the 
> given region.
> {noformat}
> 2021-07-21 13:51:05,880 WARN 
> org.apache.hadoop.hbase.mob.DefaultMobStoreCompactor: 
> hbase.unsafe.mob.discard.miss=true. This is unsafe setting recommended only 
> when first upgrading to a version with the distributed mob compaction feature 
> on a cluster that has experienced MOB data corruption.
> 2021-07-21 13:51:05,880 WARN 
> org.apache.hadoop.hbase.mob.DefaultMobStoreCompactor: 
> hbase.unsafe.mob.discard.miss=true. This is unsafe setting recommended only 
> when first upgrading to a version with the distributed mob compaction feature 
> on a cluster that has experienced MOB data corruption.
> 2021-07-21 13:51:05,880 INFO 
> org.apache.hadoop.hbase.mob.DefaultMobStoreCompactor: Compact MOB=true 
> optimized configured=false optimized enabled=false maximum MOB file 
> size=1073741824 major=true store=[table=IntegrationTestIngestWithMOB 
> family=test_cf region=3a2ee81f9244c39ba61d694e616c1a89]
> 2021-07-21 13:51:05,880 INFO 
> org.apache.hadoop.hbase.mob.DefaultMobStoreCompactor: Compact MOB=true 
> optimized configured=false optimized enabled=false maximum MOB file 
> size=1073741824 major=true store=[table=IntegrationTestIngestWithMOB 
> family=test_cf region=7a96f55bb9ae04500a06cbaef02da6a3]
> 2021-07-21 13:51:05,888 INFO 
> org.apache.hadoop.hbase.regionserver.RSRpcServices: Compacting 
> IntegrationTestIngestWithMOB,,1626787996628.c71cad04514b17ee86a407490bd27424.
> 2021-07-21 13:51:05,891 INFO 
> org.apache.hadoop.hbase.regionserver.RSRpcServices: Compacting 
> IntegrationTestIngestWithMOB,,1626787996628.8fd002bda07755decda67b7084d1e0f6.
> 2021-07-21 13:51:05,895 ERROR org.apache.hadoop.hbase.regionserver.HMobStore: 
> The mob file 
> 1bbd886460827015e5d605ed44252251202107200e5065290b424e38992f5556d9943b6a_7a96f55bb9ae04500a06cbaef02da6a3
>  could not be found in the locations 
> [hdfs://example.com:8020/hbase/mobdir/data/default/IntegrationTestIngestWithMOB/e9b5d936e7f55a4f1c3246a8d5ce5
> 3c2/test_cf, 
> hdfs://example.com:8020/hbase/archive/data/default/IntegrationTestIngestWithMOB/e9b5d936e7f55a4f1c3246a8d5ce53c2/test_cf]
>  or it is corrupt
> 2021-07-21 13:51:05,895 INFO 
> org.apache.hadoop.hbase.regionserver.throttle.PressureAwareThroughputController:
>  7a96f55bb9ae04500a06cbaef02da6a3#test_cf#compaction#1 average throughput is 
> 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 1 active 
> operations remaining, total limit is 10.00 MB/second
> 2021-07-21 13:51:05,908 INFO 
> org.apache.hadoop.hbase.regionserver.RSRpcServices: Compacting 
> IntegrationTestIngestWithMOB,,1626787996628.53186ca5008e3a964eee5f96ee3f1b26.
> 2021-07-21 13:51:05,997 ERROR 
> org.apache.hadoop.hbase.regionserver.CompactSplit: Compaction failed 
> Request=regionName=IntegrationTestIngestWithMOB,,1626787996628.7a96f55bb9ae04500a06cbaef02da6a3.,
>  storeName=test_cf, fileCount=1, fileSize=110.6 M (110.6 M), priority=1, 
> time=1626875465819
> java.io.IOException: Mob compaction failed for region: 
> 7a96f55bb9ae04500a06cbaef02da6a3
> at 
> org.apache.hadoop.hbase.mob.DefaultMobStoreCompactor.performCompaction(DefaultMobStoreCompactor.java:575)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:327)
> at 
> org.apache.hadoop.hbase.mob.DefaultMobStoreCompactor.compact(DefaultMobStoreCompactor.java:227)
> at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1407)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2183)
> at 
> org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:633)
> at 
> org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:675)
> at 
>