I hope you realize that you sent 5 emails to several hundred (thousand?)
people around the world in 15 minutes... Please think before hitting the
"send" button.

In Unix (and Windows) you can mount a drive onto a folder. This just means
that the disk is accessible from that folder; mounting a 2 TB drive at /home
does not increase the capacity of /, nor does it use any space on / to do so.
Think of / as one drive, which contains everything EXCEPT /home and is, for
example, 50 GB, while /home is another drive which is 2 TB.
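
For example, on a box laid out like that, df -h would report the two
filesystems separately (the sizes below are purely illustrative):

    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root   50G   12G   39G  23% /
    /dev/mapper/centos-home  2.7T   33M  2.7T   1% /home

Filling up /home would leave the 50 GB on / untouched, and vice versa.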

What you need is to make your Hadoop understand that it should use /home
(to be precise, a folder in /home, not the complete partition) as HDFS
storage space. I will let the other people in the thread discuss the
technicalities of setting that parameter in the right config file with you,
as I don't know this specific area well, but very roughly it boils down to
something like the sketch below.
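
As a rough, hedged sketch only (the exact path is your choice, /home/hdfs/data
is just an example, and with Ambari you would normally set this through the
DataNode directories setting rather than editing hdfs-site.xml by hand), the
relevant property would look something like:

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/hadoop/hdfs/data,/home/hdfs/data</value>
</property>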

Regards,
LLoyd

On 8 November 2015 at 00:00, Adaryl "Bob" Wakefield, MBA <
adaryl.wakefi...@hotmail.com> wrote:

> No, it’s flat-out saying that that config cannot be set to anything
> starting with /home.
>
> Adaryl "Bob" Wakefield, MBA
> Principal
> Mass Street Analytics, LLC
> 913.938.6685
> www.linkedin.com/in/bobwakefieldmba
> Twitter: @BobLovesData
>
> *From:* Naganarasimha G R (Naga) <garlanaganarasi...@huawei.com>
> *Sent:* Thursday, November 05, 2015 10:58 PM
> *To:* user@hadoop.apache.org
> *Subject:* RE: hadoop not using whole disk for HDFS
>
> Hi Bob,
>
> I suspect Ambari would not allow creating a folder directly under */home*;
> it might allow */home/<user_name>/hdfs*, since directories under /home are
> expected to be users' home dirs.
>
> Regards,
> + Naga
> ------------------------------
> *From:* Naganarasimha G R (Naga) [garlanaganarasi...@huawei.com]
> *Sent:* Friday, November 06, 2015 09:34
> *To:* user@hadoop.apache.org
> *Subject:* RE: hadoop not using whole disk for HDFS
>
> Thanks Brahma, I didn't realize he might have configured both directories;
> I was assuming Bob had configured a single new directory "/hdfs/data".
> So it is only virtually showing additional space.
> *Manually try to add a data dir in /home, for your use case, and restart
> the datanodes.*
> Not sure about the impacts in Ambari, but worth a try! A more permanent
> solution would be to remount:
> Filesystem               Size  Used Avail Use% Mounted on
> /dev/mapper/centos-home  2.7T   33M  2.7T   1% /home
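>
> A rough sketch of that manual approach (paths, user and group names here are
> assumptions, adjust them for your install):
>
>     # create the new directory and hand it to the HDFS user
>     sudo mkdir -p /home/hdfs/data
>     sudo chown -R hdfs:hadoop /home/hdfs/data
>     # add /home/hdfs/data to dfs.datanode.data.dir (comma separated), then
>     # restart the DataNodes via Ambari or hadoop-daemon.sh stop/start datanode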
> ------------------------------
> *From:* Brahma Reddy Battula [brahmareddy.batt...@huawei.com]
> *Sent:* Friday, November 06, 2015 08:19
> *To:* user@hadoop.apache.org
> *Subject:* RE: hadoop not using whole disk for HDFS
>
>
> For each configured *dfs.datanode.data.dir*, HDFS assumes it is on a
> separate partition and counts its capacity separately. So when another dir,
> /hdfs/data, was added, HDFS thought a new partition had been added and
> increased the capacity by 50GB per node, i.e. 100GB for 2 nodes.
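>
> For instance, if both configured dirs actually live on the root partition,
> something like the following (illustrative output only) would show them on
> the same 50G filesystem, which HDFS then counts twice:
>
>     df -h /hadoop/hdfs/data /hdfs/data
>     Filesystem               Size  Used Avail Use% Mounted on
>     /dev/mapper/centos-root   50G   12G   39G  23% /
>     /dev/mapper/centos-root   50G   12G   39G  23% /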
>
> Not allowing the /home directory to be configured for data.dir might be
> Ambari's constraint; instead you can *manually try to add a data dir* in
> /home for your use case, and restart the datanodes.
>
>
>
> Thanks & Regards
>
>  Brahma Reddy Battula
>
>
>
>
> ------------------------------
> *From:* Naganarasimha G R (Naga) [garlanaganarasi...@huawei.com]
> *Sent:* Friday, November 06, 2015 7:20 AM
> *To:* user@hadoop.apache.org
> *Subject:* RE: hadoop not using whole disk for HDFS
>
> Hi Bob,
>
>
>
> *1. I wasn’t able to set the config to /home/hdfs/data. I got an error
> that told me I’m not allowed to set that config to the /home directory. So
> I made it /hdfs/data.*
>
> *Naga : *I am not sure about the HDP distro, but if you make it point to
> */hdfs/data*, it will still be pointing to the root mount itself, i.e.
>
>     /dev/mapper/centos-root   50G   12G   39G  23% /
>
> The other alternative is to mount the drive to some folder other than
> /home and then try.
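>
> As a hedged sketch only (the device name and target mount point here are
> assumptions; check with lsblk or blkid first and make sure nothing is using
> /home), remounting the big volume somewhere like /data could look like:
>
>     sudo umount /home
>     sudo mkdir -p /data
>     sudo mount /dev/mapper/centos-home /data
>     # and update the /home entry in /etc/fstab so the change survives a reboot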
>
>
> *2. When I restarted, the space available increased by a whopping 100GB.*
>
> *Naga : *I am not particularly sure how this happened. Maybe you can
> recheck: if you enter the command *"df -h <path of the configured data
> dir>"* you will find out how much disk space is available on the mount
> on which that path resides.
>
>
>
> Regards,
>
> + Naga
>
>
>
>
>
>
> ------------------------------
> *From:* Adaryl "Bob" Wakefield, MBA [adaryl.wakefi...@hotmail.com]
> *Sent:* Friday, November 06, 2015 06:54
> *To:* user@hadoop.apache.org
> *Subject:* Re: hadoop not using whole disk for HDFS
>
> Is there a maximum amount of disk space that HDFS will use? Is 100GB the
> max? When we’re supposed to be dealing with “big data”, why is the amount of
> data that can be held on any one box such a small number when you’ve got
> terabytes available?
>
> Adaryl "Bob" Wakefield, MBA
> Principal
> Mass Street Analytics, LLC
> 913.938.6685
> www.linkedin.com/in/bobwakefieldmba
> Twitter: @BobLovesData
>
> *From:* Adaryl "Bob" Wakefield, MBA <adaryl.wakefi...@hotmail.com>
> *Sent:* Wednesday, November 04, 2015 4:38 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: hadoop not using whole disk for HDFS
>
> This is an experimental cluster and there isn’t anything I can’t lose. I
> ran into some issues. I’m running the Hortonworks distro and am managing
> things through Ambari.
>
> 1. I wasn’t able to set the config to /home/hdfs/data. I got an error that
> told me I’m not allowed to set that config to the /home directory. So I
> made it /hdfs/data.
> 2. When I restarted, the space available increased by a whopping 100GB.
>
>
>
> Adaryl "Bob" Wakefield, MBA
> Principal
> Mass Street Analytics, LLC
> 913.938.6685
> www.linkedin.com/in/bobwakefieldmba
> Twitter: @BobLovesData
>
> *From:* Naganarasimha G R (Naga) <garlanaganarasi...@huawei.com>
> *Sent:* Wednesday, November 04, 2015 4:26 PM
> *To:* user@hadoop.apache.org
> *Subject:* RE: hadoop not using whole disk for HDFS
>
>
> Better would be to stop the daemons, copy the data from */hadoop/hdfs/data*
> to */home/hdfs/data*, reconfigure *dfs.datanode.data.dir* to */home/hdfs/data*,
> and then start the daemons, provided the data is comparatively small.
>
> Ensure you have a backup if you have any critical data!
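>
> A minimal sketch of that sequence (paths and ownership are assumptions based
> on this thread; stop and start the DataNode through Ambari or with
> hadoop-daemon.sh):
>
>     # with the DataNode stopped:
>     sudo mkdir -p /home/hdfs/data
>     sudo cp -a /hadoop/hdfs/data/. /home/hdfs/data/
>     sudo chown -R hdfs:hadoop /home/hdfs/data
>     # point dfs.datanode.data.dir at /home/hdfs/data only, then start the DataNode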
>
>
>
> Regards,
>
> + Naga
> ------------------------------
> *From:* Adaryl "Bob" Wakefield, MBA [adaryl.wakefi...@hotmail.com]
> *Sent:* Thursday, November 05, 2015 03:40
> *To:* user@hadoop.apache.org
> *Subject:* Re: hadoop not using whole disk for HDFS
>
> So like I can just create a new folder in the home directory like:
> /home/hdfs/data
> and then set dfs.datanode.data.dir to:
> /hadoop/hdfs/data,/home/hdfs/data
>
> Restart the node and that should do it, correct?
>
> Adaryl "Bob" Wakefield, MBA
> Principal
> Mass Street Analytics, LLC
> 913.938.6685
> www.linkedin.com/in/bobwakefieldmba
> Twitter: @BobLovesData
>
> *From:* Naganarasimha G R (Naga) <garlanaganarasi...@huawei.com>
> *Sent:* Wednesday, November 04, 2015 3:59 PM
> *To:* user@hadoop.apache.org
> *Subject:* RE: hadoop not using whole disk for HDFS
>
>
> Hi Bob,
>
>
>
> Seems like you have configured the data dir to be something other than a
> folder in */home*. If so, try creating another folder and adding it to
> *"dfs.datanode.data.dir"*, separated by a comma, instead of trying to reset
> the default.
>
> It is also advisable not to configure the root partition "/" for the HDFS
> data dir; if the dir usage hits the maximum, the OS might fail to function
> properly.
>
>
>
> Regards,
>
> + Naga
> ------------------------------
> *From:* P lva [ruvi...@gmail.com]
> *Sent:* Thursday, November 05, 2015 03:11
> *To:* user@hadoop.apache.org
> *Subject:* Re: hadoop not using whole disk for HDFS
>
> What does your dfs.datanode.data.dir point to ?
>
>
> On Wed, Nov 4, 2015 at 4:14 PM, Adaryl "Bob" Wakefield, MBA <
> adaryl.wakefi...@hotmail.com> wrote:
>
>> Filesystem               Size  Used Avail Use% Mounted on
>> /dev/mapper/centos-root   50G   12G   39G  23% /
>> devtmpfs                  16G     0   16G   0% /dev
>> tmpfs                     16G     0   16G   0% /dev/shm
>> tmpfs                     16G  1.4G   15G   9% /run
>> tmpfs                     16G     0   16G   0% /sys/fs/cgroup
>> /dev/sda2                494M  123M  372M  25% /boot
>> /dev/mapper/centos-home  2.7T   33M  2.7T   1% /home
>>
>> That’s from one datanode. The second one is nearly identical. I
>> discovered that 50GB is actually a default. That seems really weird. Disk
>> space is cheap. Why would you not just use most of the disk and why is it
>> so hard to reset the default?
>>
>> Adaryl "Bob" Wakefield, MBA
>> Principal
>> Mass Street Analytics, LLC
>> 913.938.6685
>> www.linkedin.com/in/bobwakefieldmba
>> Twitter: @BobLovesData
>>
>> *From:* Chris Nauroth <cnaur...@hortonworks.com>
>> *Sent:* Wednesday, November 04, 2015 12:16 PM
>> *To:* user@hadoop.apache.org
>> *Subject:* Re: hadoop not using whole disk for HDFS
>>
>> How are those drives partitioned?  Is it possible that the directories
>> pointed to by the dfs.datanode.data.dir property in hdfs-site.xml reside on
>> partitions that are sized to only 100 GB?  Running commands like df would
>> be a good way to check this at the OS level, independently of Hadoop.
>>
>> --Chris Nauroth
>>
>> From: MBA <adaryl.wakefi...@hotmail.com>
>> Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
>> Date: Tuesday, November 3, 2015 at 11:16 AM
>> To: "user@hadoop.apache.org" <user@hadoop.apache.org>
>> Subject: Re: hadoop not using whole disk for HDFS
>>
>> Yeah. It has the current value of 1073741824 which is like 1.07 gig.
>>
>> B.
>> *From:* Chris Nauroth <cnaur...@hortonworks.com>
>> *Sent:* Tuesday, November 03, 2015 11:57 AM
>> *To:* user@hadoop.apache.org
>> *Subject:* Re: hadoop not using whole disk for HDFS
>>
>> Hi Bob,
>>
>> Does the hdfs-site.xml configuration file contain the property
>> dfs.datanode.du.reserved?  If this is defined, then the DataNode
>> intentionally will not use this space for storage of replicas.
>>
>> <property>
>>   <name>dfs.datanode.du.reserved</name>
>>   <value>0</value>
>>   <description>Reserved space in bytes per volume. Always leave this much
>> space free for non dfs use.
>>   </description>
>> </property>
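>>
>> As a rough way to reason about it (an approximation, not the exact formula
>> the DataNode uses), the capacity each volume reports is roughly:
>>
>>     reported capacity ≈ size of the partition holding the data dir
>>                         - dfs.datanode.du.reserved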
>>
>> --Chris Nauroth
>>
>> From: MBA <adaryl.wakefi...@hotmail.com>
>> Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
>> Date: Tuesday, November 3, 2015 at 10:51 AM
>> To: "user@hadoop.apache.org" <user@hadoop.apache.org>
>> Subject: hadoop not using whole disk for HDFS
>>
>> I’ve got the Hortonworks distro running on a three node cluster. For some
>> reason the disk available for HDFS is MUCH less than the total disk space.
>> Both of my data nodes have 3TB hard drives. Only 100GB of that is being
>> used for HDFS. Is it possible that I have a setting wrong somewhere?
>>
>> B.
>>
>
>
