Your Hadoop configuration is set to look for this file to determine racks. Is the file present on cluster nodes? If not, look at your hdfs-site.xml and remove the setting for a rack topology script there (or it might be in core-site.xml).
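As a sketch, the setting to look for is usually the standard Hadoop topology-script property (assuming a default Hortonworks layout; older configs may use the name `topology.script.file.name` instead). Something like this in core-site.xml (or hdfs-site.xml) would cause Hadoop to invoke that script for rack resolution:

```xml
<!-- Hypothetical core-site.xml fragment: if this property is present but the
     script does not exist on the node, ScriptBasedMapping logs the IOException
     shown below. Either remove the property or make sure the script is
     deployed at this path on every node. -->
<property>
  <name>net.topology.script.file.name</name>
  <value>/etc/hadoop/conf/rack-topology.sh</value>
</property>
```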
Matei

> On Nov 19, 2014, at 12:13 PM, Arun Luthra <arun.lut...@gmail.com> wrote:
>
> I'm trying to run Spark on Yarn on a Hortonworks 2.1.5 cluster. I'm getting
> this error:
>
> 14/11/19 13:46:34 INFO cluster.YarnClientSchedulerBackend: Registered
> executor:
> Actor[akka.tcp://sparkExecutor@#############/user/Executor#-2027837001]
> with ID 42
> 14/11/19 13:46:34 WARN net.ScriptBasedMapping: Exception running
> /etc/hadoop/conf/rack-topology.sh 10.0.28.130
> java.io.IOException: Cannot run program "/etc/hadoop/conf/rack-topology.sh"
> (in directory "###########"): error=2, No such file or directory
>
> The rack-topology script is not on the system (find / 2>/dev/null -name
> "rack-topology" finds nothing).
>
> Any possible solution?
>
> Arun Luthra