Hello,

I recently upgraded our Hadoop & HBase cluster from Hadoop 2.7.7 to 3.2.2
and from HBase 1.3.2 to 2.3.5. After the upgrade, we noticed instability in
the Hadoop cluster: the HBase master would crash due to WALs of zero file
length, which seemed to be caused by Hadoop failing to write them.

Our Flink cluster was also having problems creating checkpoints in
Hadoop, with the following message:
java.io.IOException: Unable to create new block.
at
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1398)
at
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
2021-12-10 03:14:11,259 WARN  org.apache.hadoop.hdfs.DFSClient - Could not
get block locations.

There is one warning message appearing in the Hadoop log every four
minutes which we think may be causing the instability:

WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy:
Failed to place enough replicas, still in need of 1 to reach 3
(unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7,
storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]},
newBlock=true) All required storage types are unavailable:
 unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7,
storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
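In case it helps with diagnosis: my understanding is that this warning means
the NameNode could not find a DataNode with available DISK storage for a new
replica. These are the kinds of checks we were planning to run (a sketch,
assuming a standard HDFS 3.x CLI; nothing cluster-specific):

```shell
# Summary of live/dead DataNodes and per-node capacity/remaining space
hdfs dfsadmin -report

# Check filesystem health, block placement, and under-replicated blocks
hdfs fsck / -files -blocks -locations

# Verify the storage type configured for the DataNode data directories
# (a [DISK] prefix, or no prefix at all, means StorageType.DISK)
hdfs getconf -confKey dfs.datanode.data.dir
```

So far these have not pointed us at an obvious culprit.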

What does this mean and how can it be resolved?

Also noticed in the documentation here
https://hbase.apache.org/book.html#hadoop that although HBase 2.3.x has
been tested to be fully functional with Hadoop 3.2.x, it states the
following:

Hadoop 3.x is still in early access releases and has not yet been
sufficiently tested by the HBase community for production use cases.

Is this statement still true, as Hadoop 3.x has been released for some time
now?

Any assistance is greatly appreciated.


Thanks
