Hi All,

I've been using Spark standalone for a while, and now it's time for me to
install HDFS. When a Spark worker goes down, the Spark master restarts it.
However, if a DataNode process goes down, it looks like it is not the
NameNode's job to restart it. If so:

1) Should I use a process supervisor like monit for the DataNodes?
(A rough sketch of what I have in mind is below.)
2) Is it standard practice to colocate the Spark master and the NameNode
on one machine, and Spark workers and DataNodes on the same machines?
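For question 1, here is the kind of monit stanza I'm considering. The pid
file path and service name are just guesses for my install, not anything
official, so treat this only as a sketch:

    # Restart the DataNode whenever its process disappears.
    # Pid file path and service name below are placeholders for my setup.
    check process datanode with pidfile /var/run/hadoop-hdfs/hadoop-hdfs-datanode.pid
      start program = "/bin/systemctl start hadoop-hdfs-datanode"
      stop program  = "/bin/systemctl stop hadoop-hdfs-datanode"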

Thanks!
