Hi Bob,

Ambari filters out some mount points as not being valid locations, and /home is 
one of them. BTW, I've noticed that you are using LVM – it's recommended not to 
use LVM for Hadoop data disks (it adds unnecessary overhead), though it's not a 
big deal if it's just a play cluster.

Given that you are using LVM, you could take a backup of everything under /home, 
destroy the LV, and create a smaller one for /home and a big one for Hadoop.

Stop your cluster, then:

# tar cvf home_backup.tar /home/*
# umount /home
# lvremove /dev/mapper/centos-home
# lvcreate -L 10G -n home centos
# mkfs.ext4 /dev/mapper/centos-home
# lvcreate -L 2.5T -n hadoop centos
# mkfs.ext4 /dev/mapper/centos-hadoop
# mount /dev/mapper/centos-home /home
# mount /dev/mapper/centos-hadoop /mnt
# cp -ax /hadoop/* /mnt/
# umount /mnt
# mount /dev/mapper/centos-hadoop /hadoop
# tar xvf home_backup.tar -C /

(Note tar strips the leading / when creating the archive, so extract with 
-C / to land the files back under /home.)

Restart your cluster and check that it's working – if it is, you should umount 
/hadoop, clean out the folder underneath, and then remount it. Do not forget to 
also edit your /etc/fstab so that /hadoop is mounted automatically at boot.
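
A sketch of what that fstab entry could look like – the device name matches the 
lvcreate commands above, but the mount options are an assumption, adjust to taste:

```shell
# Assumed fstab entry for the new hadoop LV; device path matches the
# lvcreate/mkfs commands above, options are a common default for data disks.
FSTAB_LINE='/dev/mapper/centos-hadoop /hadoop ext4 defaults,noatime 0 0'
echo "$FSTAB_LINE"
# On the real host, append it once and verify it parses:
#   echo "$FSTAB_LINE" >> /etc/fstab
#   mount -a
```

mount -a re-reads /etc/fstab and mounts anything not already mounted, so a typo 
shows up immediately instead of at the next reboot.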

Thanks
Olivier

From: "Adaryl Wakefield, MBA Bob" <adaryl.wakefi...@hotmail.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, 4 November 2015 at 23:02
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: HDFS disk usage

I’ve been advised to place the data in a directory under /home. I attempted to 
do that and got an error saying dfs.datanode.data.dir can’t be set to anything 
in the /home directory.

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData

From: Adaryl "Bob" Wakefield, MBA <adaryl.wakefi...@hotmail.com>
Sent: Wednesday, November 04, 2015 2:18 PM
To: user@ambari.apache.org
Subject: Re: HDFS disk usage

I dug into this a little bit more. Apparently the default is 50GB.

On my datanodes, I ran df -hl. This is the output:
Filesystem               Size  Used  Avail  Use%  Mounted on
/dev/mapper/centos-root   50G   12G    39G   23%  /
devtmpfs                  16G     0    16G    0%  /dev
tmpfs                     16G     0    16G    0%  /dev/shm
tmpfs                     16G  1.4G    15G    9%  /run
tmpfs                     16G     0    16G    0%  /sys/fs/cgroup
/dev/sda2                494M  123M   372M   25%  /boot
/dev/mapper/centos-home  2.7T   33M   2.7T    1%  /home

If I’m reading this right, it HAS to be mounted correctly, because no other 
mount point has enough space. Correct?
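
One direct way to confirm which filesystem backs a given path – the DataNode 
directory below is an assumption, substitute your actual dfs.datanode.data.dir 
value:

```shell
# df on a specific path reports the filesystem that actually holds it.
df -h /home
# Hypothetical DataNode directory; replace with your configured one:
df -h /hadoop/hdfs/data 2>/dev/null || echo "/hadoop/hdfs/data not found"
```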


Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData

From: Adaryl "Bob" Wakefield, MBA <adaryl.wakefi...@hotmail.com>
Sent: Tuesday, November 03, 2015 10:32 AM
To: user@ambari.apache.org
Subject: Re: HDFS disk usage

How would I go about doing that?

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData

From: Olivier Renault <orena...@hortonworks.com>
Sent: Tuesday, November 03, 2015 3:49 AM
To: user@ambari.apache.org
Subject: Re: HDFS disk usage


Could you double check that your datanodes are using the correct mount point?
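
One way to do that on each datanode – the sample value below is a stand-in, not 
your actual setting:

```shell
# On each datanode, the configured location can be read with:
#   hdfs getconf -confKey dfs.datanode.data.dir
# The value is comma-separated. The sample below is a stand-in; swap in
# the real output from your hosts.
DATA_DIRS="/hadoop/hdfs/data"
for d in $(printf '%s' "$DATA_DIRS" | tr ',' ' '); do
  echo "checking $d"
  # df -h "$d"    # run on the real host to see the backing filesystem
done
```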

Thanks,
Olivier
------
Olivier Renault
Solution Engineer
Mobile: +44 7500 933 036



On Tue, Nov 3, 2015 at 1:45 AM -0800, "Adaryl "Bob" Wakefield, MBA" 
<adaryl.wakefi...@hotmail.com> wrote:

Why is there such a large discrepancy between what is reported and my actual 
disk size?

B.

From: Olivier Renault <orena...@hortonworks.com>
Sent: Tuesday, November 03, 2015 3:01 AM
To: user@ambari.apache.org
Subject: Re: HDFS disk usage


It reports the space available for HDFS.

Thanks,
Olivier
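
To make that concrete: HDFS counts only the filesystem(s) holding 
dfs.datanode.data.dir, minus dfs.datanode.du.reserved, not the whole machine. A 
rough illustration with an assumed three datanodes whose data dirs sit on a 50G 
root filesystem instead of a 2.7T volume:

```shell
# Assumed numbers: 3 datanodes, data dirs on a 50G root FS rather than
# a 2.7T LV. HDFS capacity is the sum over datanodes of the FS holding
# the data dirs, not total machine disk.
NODES=3
PER_NODE_GB=50
echo "HDFS configured capacity ~ $((NODES * PER_NODE_GB))GB, not $((NODES * 2700))GB"
# Per-node breakdown on a live cluster: hdfs dfsadmin -report
```

On a live cluster, hdfs dfsadmin -report breaks this down per datanode 
(Configured Capacity, DFS Used, DFS Remaining).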



On Mon, Nov 2, 2015 at 11:57 PM -0800, "Adaryl "Bob" Wakefield, MBA" 
<adaryl.wakefi...@hotmail.com> wrote:

On the dashboard, what exactly is HDFS disk usage reporting? The numbers I’m 
seeing are WAY less than the total disk space on my cluster.

B.
