To my understanding, configuration file names should be unique across
the services installed in the cluster.

To confirm this, open the browser's developer console and check what the
error is. If the wizard is stuck on the loading icon, it is highly likely
that a JavaScript error was thrown, as a result of which the configs were
not loaded.
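If the name collision is indeed the cause, one way around it may be to rename the custom service's config types so they do not clash with the stock HDFS/YARN ones. A minimal sketch, assuming you rename both the file under configuration/ and the matching <config-type> entry in metainfo.xml (the hdfsyarn- prefix is only an illustrative choice, not an Ambari requirement):

```xml
<!-- configuration/hdfsyarn-hdfs-site.xml: renamed from hdfs-site.xml so it
     does not collide with the HDP stack's built-in hdfs-site config type.
     The property below is illustrative only. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
    <description>Default block replication (illustrative).</description>
  </property>
</configuration>
```

The configuration-dependencies section of metainfo.xml would then reference the new name, e.g. <config-type>hdfsyarn-hdfs-site</config-type> instead of <config-type>hdfs-site</config-type>.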

On Mon, Apr 18, 2016 at 10:43 PM, Souvik Sarkhel <[email protected]>
wrote:

> Hi,
>
> I have created a custom service named *HDFSYARN* which installs *Hadoop*
> in all the nodes and starts namenode, datanode and resource manager in yarn
> mode. I want the user to be able to modify the following .xml files:
>
>
> *capacity-scheduler.xml*
> *core-site.xml*
> *mapred-site.xml*
> *yarn-site.xml*
> *hdfs-site.xml*
>
>  I have followed this folder structure:
>
> metainfo.xml
> |_ configuration
>      capacity-scheduler.xml
>      core-site.xml
>      mapred-site.xml
>      yarn-site.xml
>      hdfs-site.xml
> |_ package
>      |_ scripts
>           master.py
>           slave.py
>
> and my *metainfo.xml* file looks like this:
>
> <?xml version="1.0"?>
> <metainfo>
>   <schemaVersion>2.0</schemaVersion>
>   <services>
>     <service>
>       <name>HDFSYARN</name>
>       <displayName>HDFS YARN</displayName>
>       <comment>HDFS is a Java-based file system that provides scalable
>       and reliable data storage, it is designed to span large clusters
>       of commodity servers</comment>
>       <version>2.6.0</version>
>       <components>
>         <component>
>           <name>HDFS_NAMENODE</name>
>           <displayName>HDFS NameNode</displayName>
>           <category>MASTER</category>
>           <cardinality>1</cardinality>
>           <timelineAppid>HDFSYARN</timelineAppid>
>           <dependencies>
>             <dependency>
>               <name>TOMCAT/TOMCAT_SLAVE</name>
>               <scope>cluster</scope>
>               <auto-deploy>
>                 <enabled>true</enabled>
>               </auto-deploy>
>             </dependency>
>           </dependencies>
>           <commandScript>
>             <script>scripts/master.py</script>
>             <scriptType>PYTHON</scriptType>
>             <timeout>1200</timeout>
>           </commandScript>
>         </component>
>         <component>
>           <name>HDFS_DATANODE</name>
>           <displayName>HDFS DataNode</displayName>
>           <cardinality>0+</cardinality>
>           <category>SLAVE</category>
>           <timelineAppid>HDFSYARN</timelineAppid>
>           <commandScript>
>             <script>scripts/slave.py</script>
>             <scriptType>PYTHON</scriptType>
>             <timeout>1200</timeout>
>           </commandScript>
>         </component>
>       </components>
>       <osSpecifics>
>         <osSpecific>
>           <osFamily>any</osFamily>
>           <!-- note: use osType rather than osFamily for Ambari 1.5.0 and 1.5.1 -->
>           <packages>
>             <package>
>               <name>hadoop-2.6.0</name>
>             </package>
>           </packages>
>         </osSpecific>
>       </osSpecifics>
>       <requiredServices>
>         <service>TOMCAT</service>
>       </requiredServices>
>       <configuration-dependencies>
>         <config-type>core-site</config-type>
>         <config-type>hdfs-site</config-type>
>         <config-type>mapred-site</config-type>
>         <config-type>capacity-scheduler</config-type>
>         <config-type>yarn-site</config-type>
>       </configuration-dependencies>
>     </service>
>   </services>
> </metainfo>
>
> But the moment I place *hdfs-site.xml* and *yarn-site.xml* in the
> configuration folder and try to add the service, it gets stuck at the
> Customize Services window
> [image: Inline image 1]
> and when my configuration folder doesn't contain those two files,
> everything works properly.
> Is it because the *HDP* stack also has *HDFS* and *YARN* services, and
> somehow Ambari is still fetching some dependencies from those services
> instead of the custom-defined service?
> Thanks in advance.
>
> --
> Souvik Sarkhel
>