The error message says it cannot load the native lz4 library needed to read the HFiles.

Please check whether the lz4 native libraries are correctly located and
linkable on every master and region server host.

Also note that the stack trace references hadoop-common-3.2.4.jar even
though you mention Hadoop 3.3.5, so a mixed-version classpath may be
contributing to the UnsatisfiedLinkError.
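One way to verify the native codec setup is Hadoop's checknative tool together with HBase's CompressionTest utility. This is only a sketch: run it on each affected host, and the HDFS path below is a placeholder, not the file from your trace.

```shell
# List the native libraries Hadoop can actually load on this host;
# the output includes a line indicating whether lz4 is available.
hadoop checknative -a

# Exercise the lz4 codec end-to-end by writing and reading a test file.
# Replace the placeholder path with a scratch location in your cluster.
hbase org.apache.hadoop.hbase.util.CompressionTest hdfs:///tmp/lz4-check lz4
```

If checknative reports lz4 as unavailable, make sure libhadoop and the lz4 shared library are on java.library.path for the HBase processes.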

Manimekalai K <[email protected]> 于2024年10月29日周二 18:23写道:
>
> Dear HBase Community,
>
> We are currently encountering an issue in HBase version 2.5.4 where regions 
> are not splitting as expected, causing the region size to exceed the 
> configured limits.
>
> Issue Details:
>
> HBase Version: 2.5.4
> Hadoop Version: 3.3.5
> Problem: Regions are not splitting according to the configured thresholds, 
> resulting in regions growing beyond the set size limits.
> Impact: This can potentially lead to increased I/O load and reduced 
> performance due to oversized regions.
>
> We have attached the relevant trace for further context.
>>
>> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
>> reading data index and meta index from file 
>> hdfs://clustername/hbasedata/data/namespace/Table1/00d39663375797659886f6cf865fe9bf/ATTACHMENT_STREAM/b2be642f2d894d3eb8837ec7e4f796bb
>>         at 
>> org.apache.hadoop.hbase.io.hfile.HFileInfo.initMetaAndIndex(HFileInfo.java:392)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.regionserver.HStoreFile.open(HStoreFile.java:394) 
>> ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.regionserver.HStoreFile.initReader(HStoreFile.java:518)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.splitStoreFile(HRegionFileSystem.java:693)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.splitStoreFile(SplitTableRegionProcedure.java:806)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.access$000(SplitTableRegionProcedure.java:98)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure$StoreFileSplitter.call(SplitTableRegionProcedure.java:839)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure$StoreFileSplitter.call(SplitTableRegionProcedure.java:820)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
>> ~[?:1.8.0_181]
>>         at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>  ~[?:1.8.0_181]
>>         at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>  ~[?:1.8.0_181]
>>         at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181]
>> Caused by: java.lang.UnsatisfiedLinkError: 
>> org.apache.hadoop.io.compress.lz4.Lz4Decompressor.decompressBytesDirect()I
>>         at 
>> org.apache.hadoop.io.compress.lz4.Lz4Decompressor.decompressBytesDirect(Native
>>  Method) ~[hadoop-common-3.2.4.jar:?]
>>         at 
>> org.apache.hadoop.io.compress.lz4.Lz4Decompressor.decompress(Lz4Decompressor.java:231)
>>  ~[hadoop-common-3.2.4.jar:?]
>>         at 
>> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
>>  ~[hadoop-common-3.2.4.jar:?]
>>         at 
>> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
>>  ~[hadoop-common-3.2.4.jar:?]
>>         at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
>> ~[?:1.8.0_181]
>>         at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
>> ~[?:1.8.0_181]
>>         at java.io.BufferedInputStream.read(BufferedInputStream.java:345) 
>> ~[?:1.8.0_181]
>>         at 
>> org.apache.hadoop.hbase.io.util.BlockIOUtils.readFullyWithHeapBuffer(BlockIOUtils.java:151)
>>  ~[hbase-common-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultDecodingContext.prepareDecoding(HFileBlockDefaultDecodingContext.java:104)
>>  ~[hbase-common-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.io.hfile.HFileBlock.unpack(HFileBlock.java:644) 
>> ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl$1.nextBlock(HFileBlock.java:1397)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl$1.nextBlockWithBlockType(HFileBlock.java:1407)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.io.hfile.HFileInfo.initMetaAndIndex(HFileInfo.java:365)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.regionserver.HStoreFile.open(HStoreFile.java:394) 
>> ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.regionserver.HStoreFile.initReader(HStoreFile.java:518)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.splitStoreFile(HRegionFileSystem.java:693)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.splitStoreFile(SplitTableRegionProcedure.java:806)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.access$000(SplitTableRegionProcedure.java:98)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure$StoreFileSplitter.call(SplitTableRegionProcedure.java:839)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at 
>> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure$StoreFileSplitter.call(SplitTableRegionProcedure.java:820)
>>  ~[hbase-server-2.5.4-hadoop3.jar:2.5.4-hadoop3]
>>         at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
>> ~[?:1.8.0_181]
>>         at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>  ~[?:1.8.0_181]
>>         at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>  ~[?:1.8.0_181]
>
>
> If anyone has encountered a similar issue or has suggestions, please share 
> any possible workarounds.
>
> Thank you in advance.
>
>
> Regards,
> Manimekalai K