Omkar - There is no preference. Name them and put them wherever fits your preference. If you don’t have a preference, go with the naming used in the documentation.
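For reference, following the documentation’s /grid/N naming, the /etc/fstab entries might look something like this (the device names sdb–sdh are assumptions here; match them to your actual drives):

```
# /etc/fstab — one ext4 data drive per mount point, named per the docs
/dev/sdb  /grid/0  ext4  noatime,nodiratime  0 0
/dev/sdc  /grid/1  ext4  noatime,nodiratime  0 0
...
/dev/sdh  /grid/6  ext4  noatime,nodiratime  0 0
```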

--
Hortonworks - We do Hadoop

Sean Roberts
Partner Solutions Engineering - EMEA
@seano
On 8 May 2015, at 15:02, Joshi Omkar wrote:

Hi Sean,

Thanks for the inputs.

What is the normal/ideal location to set a mount point?

Yes, I plan to use Ambari for the configs once the disks are ready to be used for the datanodes.

Regards,
Omkar Joshi


From: Sean Roberts [mailto:srobe...@hortonworks.com]
Sent: den 8 maj 2015 15:59
To: user@ambari.apache.org
Subject: Re: Adding disks and partitions


Omkar - Yes, with HDFS (nothing HDP specific) you mount each drive separately.

Where you mount them doesn’t matter, but never set a mount point under /dev.

This talks to the various configuration settings to update to match those dirs, but you would do them from Ambari instead of editing the configuration files manually:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.4/HDP_Man_Install_v224/index.html#ref-1f93da57-d4dc-4de9-8e9c-34b1442e77f8
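As a rough sketch of the setup described above (this only prints the commands rather than running them, since formatting is destructive; the device names sdb–sdh and the /grid/N mount points are assumptions, not anything mandated by HDP):

```shell
# Print the commands to format each data drive with a single ext4
# partition and mount it under /grid/N (never under /dev).
devs="sdb sdc sdd sde sdf sdg sdh"   # assumed: OS lives on sda
i=0
for d in $devs; do
  echo "mkfs.ext4 -m 0 /dev/$d"           # -m 0: no blocks reserved for root
  echo "mkdir -p /grid/$i"
  echo "mount -o noatime /dev/$d /grid/$i"
  i=$((i+1))
done
```

After mounting, you would then point dfs.datanode.data.dir (via Ambari, as above) at a directory on each mount, e.g. /grid/0/hadoop/hdfs/data,...,/grid/6/hadoop/hdfs/data.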

--
Hortonworks - We do Hadoop

Sean Roberts
Partner Solutions Engineering - EMEA
@seano
On 8 May 2015, at 14:46, Joshi Omkar wrote:

Hi,

I have 8 x 600 GB disks on each machine that can be used for HDP.

One disk is used for /root, /home, etc., so I'm now left with 7 disks.

If I understand correctly from the HDP recommendations (http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_cluster-planning-guide/content/file_system.html), I can mount these 7 disks as:

/dev/grid/0
/dev/grid/1
...
/dev/grid/6

where each disk will have one single big partition with ext3/ext4.

Regards,
Omkar Joshi
