Re: Failed to place enough replicas, All required storage types are unavailable

2021-12-13 Thread Ayush Saxena
Hey,
Looks like an issue with the datanodes. The block placement policy isn't able
to find 3 datanodes with available DISK storage. Check the state of your
cluster's datanodes, their configured storage types, the free space on the
DISK tier, and related settings.
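
A quick way to check all of that from the command line (output fields may
vary slightly by version):

    # Per-datanode capacity, DFS used/remaining, and live/dead state
    hdfs dfsadmin -report

    # Rack placement of the live datanodes
    hdfs dfsadmin -printTopology

Any node that shows up dead, decommissioning, or with no DISK space left is
a likely reason placement is failing.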

Try enabling debug logs; you should then see lines like "Datanode X is not
chosen since ...", followed by the reason each node was rejected.
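
For example, to raise the log level for the placement policy at runtime
(the host:port below is a placeholder for your NameNode's HTTP address;
9870 is the default in 3.x):

    hadoop daemonlog -setlevel <namenode-host>:9870 \
        org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy DEBUG

or persistently in the NameNode's log4j.properties:

    log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy=DEBUG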


If it is a busy prod cluster, the datanodes may be getting excluded as
NODE_TOO_BUSY. If so, try tuning the load-awareness configs:
dfs.namenode.redundancy.considerLoad
or
dfs.namenode.redundancy.considerLoad.factor
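
As a rough sketch in hdfs-site.xml (the factor value below is only an
example; in 3.2.x the defaults are true and 2.0):

    <!-- Skip datanodes whose xceiver load exceeds factor * cluster average -->
    <property>
      <name>dfs.namenode.redundancy.considerLoad</name>
      <value>true</value>
    </property>
    <!-- Example value only: tolerate nodes up to 3x the average load -->
    <property>
      <name>dfs.namenode.redundancy.considerLoad.factor</name>
      <value>3.0</value>
    </property>

Raising the factor (or setting considerLoad to false) keeps busy nodes
eligible for placement, at the risk of piling more load onto them.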

If the nodes are being excluded for some other reason, you'll need to address
that reason accordingly…
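
One more thing worth ruling out, since the warning names the HOT policy:
check which storage policy is set on the affected paths and which storage
types your datanodes actually register (the path below is just a
placeholder):

    # Policies the cluster knows about
    hdfs storagepolicies -listPolicies

    # Policy in effect on a specific directory
    hdfs storagepolicies -getStoragePolicy -path /hbase

If a path demands DISK but the datanodes only expose ARCHIVE volumes (or
the reverse), you get exactly this warning.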

-Ayush

> On 13-Dec-2021, at 7:43 PM, Claude M  wrote:
> replicationFallbacks=[ARCHIVE]}
> 
> What does this mean and how can it be resolved?  
> 
> Also


Re: Failed to place enough replicas, All required storage types are unavailable

2021-12-13 Thread Venkatesulu Guntakindapalli
Hi,

Did you check the available free space on the HDFS cluster? This warning
usually appears when there is insufficient storage on HDFS.

Check your NameNode web UI; there you can easily see how much free space is
left on the DISK tier.
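
If you prefer the command line, the same numbers are exposed through the
NameNode's JMX endpoint (host below is a placeholder; 9870 is the default
HTTP port in 3.x):

    curl 'http://<namenode-host>:9870/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'

Look at the Total, Used, and Free fields in the JSON response.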

On Mon, Dec 13, 2021 at 7:31 PM Claude M  wrote:

> Hello,
>
> I recently upgraded our Hadoop & HBase cluster from Hadoop 2.7.7 to 3.2.2
> and HBase 1.3.2 to 2.3.5.  After the upgrade, we noticed instability with
> the Hadoop cluster: the HBase master would crash due to zero-length WALs,
> which seemed to be caused by Hadoop failing to write.
>
> Our Flink cluster was also having problems when creating checkpoints in
> Hadoop, with the following message:
> java.io.IOException: Unable to create new block.
> at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1398)
> at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
> 2021-12-10 03:14:11,259 WARN  org.apache.hadoop.hdfs.DFSClient - Could not
> get block locations.
>
> There is one warning message appearing in the Hadoop log every four
> minutes, which we think may be causing the instability.
>
> WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy:
> Failed to place enough replicas, still in need of 1 to reach 3
> (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7,
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]},
> newBlock=true) All required storage types are unavailable:
>  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7,
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
>
> What does this mean and how can it be resolved?
>
> We also noticed in the documentation here,
> https://hbase.apache.org/book.html#hadoop, that although HBase 2.3.x has
> been tested to be fully functional with Hadoop 3.2.x, it states the
> following:
>
> Hadoop 3.x is still in early access releases and has not yet been
> sufficiently tested by the HBase community for production use cases.
>
> Is this statement still true, as Hadoop 3.x has been released for some
> time now?
>
> Any assistance is greatly appreciated.
>
>
> Thanks
>


-- 
Thanks & Regards,

Venkatesh
SRE, Media.net (Data Platform)
Flock Id- venkatesul...@media.net
Contact - 9949522101


Failed to place enough replicas, All required storage types are unavailable

2021-12-13 Thread Claude M
Hello,

I recently upgraded our Hadoop & HBase cluster from Hadoop 2.7.7 to 3.2.2
and HBase 1.3.2 to 2.3.5.  After the upgrade, we noticed instability with
the Hadoop cluster: the HBase master would crash due to zero-length WALs,
which seemed to be caused by Hadoop failing to write.

Our Flink cluster was also having problems when creating checkpoints in
Hadoop, with the following message:
java.io.IOException: Unable to create new block.
at
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1398)
at
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
2021-12-10 03:14:11,259 WARN  org.apache.hadoop.hdfs.DFSClient - Could not
get block locations.

There is one warning message appearing in the Hadoop log every four
minutes, which we think may be causing the instability.

WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy:
Failed to place enough replicas, still in need of 1 to reach 3
(unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7,
storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]},
newBlock=true) All required storage types are unavailable:
 unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7,
storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}

What does this mean and how can it be resolved?

We also noticed in the documentation here,
https://hbase.apache.org/book.html#hadoop, that although HBase 2.3.x has
been tested to be fully functional with Hadoop 3.2.x, it states the
following:

Hadoop 3.x is still in early access releases and has not yet been
sufficiently tested by the HBase community for production use cases.

Is this statement still true, as Hadoop 3.x has been released for some time
now?

Any assistance is greatly appreciated.


Thanks

