
Hi,

I am building a Mesos cluster to run Spark workloads (in addition to
other frameworks). I am under the impression that it is
preferable/recommended to run the HDFS DataNode process and the Spark
slave on the same physical node (or EC2 instance or VM).

My question is: what is the recommended resource split? How much
memory and CPU should I preallocate for HDFS, and how much should I
set aside as allocatable by Mesos? Is there a rule-of-thumb
recommendation for this? A concrete sketch of what I mean is below.
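For concreteness, here is the kind of split I had in mind. The node
size, the master URL, and the actual numbers are purely hypothetical
examples on my part, not something I have validated: on an 8-core /
32 GB node, leave roughly 1 core and 4 GB for the DataNode plus the
OS, and advertise the remainder to Mesos through the agent's
--resources flag (mem is in MB):

    # hypothetical 8-core / 32 GB node: hold back ~1 core and ~4 GB
    # for the HDFS DataNode and the OS, offer the rest to Mesos
    mesos-slave --master=zk://zk1:2181/mesos \
                --resources='cpus:7;mem:28672'

Is something along those lines sensible, or is there a
better-established convention for sizing the DataNode reservation?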

-- Ankur Chauhan