I am also curious about the answer.

If you grep the code, you will notice that MANY Python/shell scripts hard-code 
absolute paths like /var/xxx and /usr/xxx. Not everyone can install software under /.
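As a quick illustration (paths and the sample script are made up; point the grep at wherever your Ambari/stack scripts actually live), this is the kind of search that turns those hard-coded paths up:

```shell
# Sketch: find absolute-path literals in scripts under a directory.
# Here we create a throwaway script just to have something to match.
tmpdir=$(mktemp -d)
cat > "$tmpdir/example.sh" <<'EOF'
LOG_DIR="/var/log/myapp"
PID_FILE="/usr/local/run/myapp.pid"
EOF

# -r recurse, -n show line numbers, -E extended regex for /var or /usr paths
grep -rnE '(/var|/usr)/[^" ]*' "$tmpdir"

rm -rf "$tmpdir"
```

Running the same grep over the real install scripts shows how widespread the assumption is.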

Ruhua



From: Ganesh Viswanathan <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Wednesday, March 16, 2016 at 1:17 PM
To: "[email protected]" <[email protected]>
Subject: Manual install locations for Ambari, HDFS and HBase

Hello-
I am trying to set up a Hadoop cluster using custom install locations. My root fs 
does not have enough storage, but I have a large ephemeral storage disk that I 
could mount and use for installing Ambari, HDFS, and HBase.

Is there a configuration setting in Ambari that can move all Ambari- and 
Hadoop-related storage (Ambari scripts, configs, logs, HDFS, HBase, ZooKeeper 
data, etc.) onto separate drives (e.g., /hadoop, /hbase, /ambari, etc.)?

I know this works when building HDFS and HBase from the Apache sources and 
customizing the installation. I updated all the settings in the "Customize 
Services" step in Ambari, but I still see Hadoop conf directories missing and 
Ambari scripts running from /etc, /var, etc. Is there a central root.dir setting 
for each of these services (Ambari and the services it deploys) that would let 
me update the locations?
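For context, the per-service settings I changed in "Customize Services" are directory properties like the ones below (the values shown here are placeholders for my mount points, and the hostname in the HBase value is made up):

```
<!-- hdfs-site.xml: where the NameNode and DataNode store their data -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/hadoop/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/hadoop/hdfs/data</value>
</property>

<!-- hbase-site.xml: HBase root directory (a location inside HDFS) -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://namenode.example.com:8020/hbase</value>
</property>
```

Even with these set, the Ambari agent scripts and conf directories themselves still land under /etc and /var, which is the part I cannot find a setting for.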


Thanks!
Ganesh
