Re: Failed to place enough replicas, All required storage types are unavailable

2021-12-13 Thread Venkatesulu Guntakindapalli
Hi,

Did you check the available free space on the HDFS cluster? That warning usually
appears when there is not enough storage of the required type available on HDFS.

Check your NameNode web UI; there you can easily see how much free space is
left on the DISK tier.
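
If you prefer the command line, something like the following should show the
remaining capacity (a rough sketch; the output format varies a bit between
Hadoop releases):

    hdfs dfsadmin -report    # capacity, DFS used and DFS remaining per DataNode
    hdfs dfs -df -h /        # overall filesystem capacity and free space

If the DataNodes report little or no remaining space on their DISK volumes (or
the data directories in dfs.datanode.data.dir are tagged with a storage type
other than DISK), the block placement policy cannot find enough targets and
logs exactly this warning.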

On Mon, Dec 13, 2021 at 7:31 PM Claude M  wrote:

>
>
> Hello,
>
> I recently updated our Hadoop & HBase cluster from Hadoop 2.7.7 to 3.2.2
> and HBase 1.3.2 to 2.3.5.  After the upgrade, we noticed instability w/ the
> Hadoop cluster as the HBase master would crash due to WALs of zero file
> length which seemed to have been caused by Hadoop failing to write.
>
> Our Flink cluster was also having problems when creating checkpoints in
> Hadoop w/ following message:
> java.io.IOException: Unable to create new block.
> at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1398)
> at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
> 2021-12-10 03:14:11,259 WARN  org.apache.hadoop.hdfs.DFSClient - Could not
> get block locations.
>
> There is one warning message that is appearing in the hadoop log every
> four minutes which we think may be causing the instability.
>
> WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy:
> Failed to place enough replicas, still in need of 1 to reach 3
> (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7,
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]},
> newBlock=true) All required storage types are unavailable:
>  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7,
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
>
> What does this mean and how can it be resolved?
>
> Also noticed in the documentation here
> https://hbase.apache.org/book.html#hadoop that although HBase 2.3.x has
> been tested to be fully functional with Hadoop 3.2.x, it states the
> following:
>
> Hadoop 3.x is still in early access releases and has not yet been
> sufficiently tested by the HBase community for production use cases.
>
> Is this statement still true, as Hadoop 3.x has been released for some time
> now?
>
> Any assistance is greatly appreciated.
>
>
> Thanks
>


-- 
Thanks & Regards,

Venkatesh
SRE, Media.net (Data Platform)
Flock Id- venkatesul...@media.net
Contact - 9949522101


Re: After deleting data of Hbase table hdfs size is not decreasing HDFS-15812

2021-02-11 Thread Venkatesulu Guntakindapalli
Hi,

Can you check whether any snapshots were taken on the HBase table in the past?
Snapshots keep references to the old HFiles, so the space they hold is not
released by deletes or major compaction.
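
A couple of quick checks, assuming the default hbase.rootdir of /hbase (adjust
the paths if yours differs):

    echo "list_snapshots" | hbase shell -n     # HBase snapshots
    hdfs dfs -du -h /hbase/archive             # HFiles retained for snapshots
    hdfs lsSnapshottableDir                    # HDFS-level snapshottable dirs

If snapshots exist, the HFiles they reference stay under /hbase/archive even
after the rows are deleted and the table is major compacted, so the HDFS size
of the HBase directories will not go down until the snapshots are removed.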

On Thu, Feb 11, 2021 at 4:14 PM Ayush Saxena  wrote:

> Not sure how HBase or Phoenix handle this, but do you actually see the
> directory/file deleted in HDFS? Check whether the file you are deleting is
> really getting deleted, i.e. it exists before and is gone once you have
> executed the delete.
>
> Are HDFS snapshots enabled? Not only on the directory itself but on any of
> its parents? You can run lsSnapshottableDir as a super user and check, or
> there may be a better way, the NameNode UI used to show this.
>
> Is trash enabled?
>
> Can you get the audit log entry for the file you deleted?
>
> If all of that looks fine, can you get the blocks and DataNode locations of
> the file before deleting (fsck should help here), and then check with fsck
> whether those block IDs still persist after the delete.
>
>
> -Ayush
>
> On 11-Feb-2021, at 12:14 PM, satya prakash gaurav 
> wrote:
>
> 
> Hi Team,
> Can anyone please help on this issue?
>
> Regards,
> Satya
>
> On Wed, Feb 3, 2021 at 7:27 AM satya prakash gaurav 
> wrote:
>
>> Hi Team,
>>
>> I have raised a JIRA, HDFS-15812.
>> We are using HDP 3.1.4.0-315 and HBase 2.0.2.3.1.4.0-315.
>>
>> We are deleting the data with the normal HBase delete command and also
>> through the API using Phoenix. The count is reducing in Phoenix and HBase,
>> but the HDFS size of the HBase directory is not reducing, even after I ran
>> a major compaction.
>>
>> Regards,
>> Satya
>>
>>
>>
>
> --
> --
> Regards,
> S.P.Gaurav
>
>

-- 
Thanks & Regards,

Venkatesh
SRE, Media.net (Data Platform)
Flock Id- venkatesul...@media.net
Contact - 9949522101


Re: Hadoop monitoring using Prometheus

2020-06-03 Thread Venkatesulu Guntakindapalli
Hi,

You can use the JMX exporter, which exposes a Prometheus scrape target;
https://github.com/prometheus/jmx_exporter should help in your case.
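
A minimal sketch of the setup, assuming you run the exporter as a Java agent
next to each Hadoop daemon (the jar location, port and config path below are
just examples):

    # in hadoop-env.sh (Hadoop 3.x variable names)
    export HDFS_NAMENODE_OPTS="$HDFS_NAMENODE_OPTS -javaagent:/opt/jmx/jmx_prometheus_javaagent.jar=7071:/opt/jmx/config.yaml"

    # /opt/jmx/config.yaml - start by exporting every MBean as-is
    rules:
      - pattern: ".*"

Prometheus can then scrape http://<namenode-host>:7071/metrics, and the rules
can be tightened later to keep only the metrics you care about.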

On Wed, Jun 3, 2020 at 3:33 PM Sanel Zukan  wrote:

> ravi kanth  writes:
> > Wei-Chiu,
> >
> > Thanks for the response. Are there any workarounds for Hadoop 3.1.0? As
> > this is a critical production cluster it needs immediate attention.
>
> My approach was a bit different.
>
> Instead of using Prometheus to pull the metrics, I used collectd [1] to
> collect system and JMX metrics, send them to Riemann [2] and store them in
> a database (in my case OpenTSDB, but changing that to a Prometheus
> pushgateway address would be a couple of lines of code in Riemann).
>
> Inside Riemann, you can tag, shape and calculate metrics the way you
> like.
>
> There is also a standalone app [3] you can run to watch multiple JMX
> connections, but I haven't tried that.
>
> [1] https://collectd.org/
> [2] http://riemann.io/
> [3] https://github.com/twosigma/riemann-jmx
>
> > Thanks,
> > Ravi
>
> Best,
> Sanel
>
> > On Tue, Jun 2, 2020 at 5:59 PM Wei-Chiu Chuang 
> wrote:
> >
> >> Check out HADOOP-16398
> >> 
> >> It's a new feature in Hadoop 3.3.0
> >>
> >> Akira might be able to help.
> >>
> >> On Tue, Jun 2, 2020 at 5:56 PM ravi kanth  wrote:
> >>
> >>> Hi Everyone,
> >>>
> >>> We have a production cluster with 35 nodes that we are currently using.
> >>> We currently collect system metrics with Prometheus + Grafana to
> >>> visualize the servers. However, we are more interested in visualizing
> >>> the Hadoop & YARN service-level metrics.
> >>>
> >>> I came across the Hadoop JMX port, which exposes all the needed metrics
> >>> from the services. However, I have been unsuccessful in getting these
> >>> metrics into the Prometheus JMX agent.
> >>>
> >>> Is there anyone who has successfully got JMX monitoring of the Hadoop
> >>> components working with Prometheus? Any help is greatly appreciated.
> >>>
> >>> Currently, we have scripts to parse the meaningful values out of the JMX
> >>> endpoint of the NameNode & DataNodes.
> >>>
> >>> Thanks In advance,
> >>>
> >>> Ravi
> >>>
> >>>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: user-h...@hadoop.apache.org
>
>

-- 
Thanks & Regards,

Venkatesh
SRE, Media.net (Data Platform)
Flock Id- venkatesul...@media.net
Contact - 9949522101


Can I get storage policy ids from fsimage

2018-02-07 Thread Venkatesulu Guntakindapalli
Hi,

I am using the Offline Image Viewer to read and analyse the fsimage with the XML processor.
1. Can someone explain the format of the XML file to me?
2. How can I get the storage policy of the HDFS files from this XML file?
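
For reference, the invocation is along these lines (the fsimage file name is
just an example):

    hdfs oiv -p XML -i /data/nn/current/fsimage_0000000000001234 -o fsimage.xml

and I would like to find something like a storage-policy id on each inode in
the resulting XML output.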

Thanks 
Venkatesh
Hadoop Developer
-
To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
For additional commands, e-mail: user-h...@hadoop.apache.org