Hi team,

I would like to set up RStudio with sparklyr on a single Linux node. The dev
team is three to four people.
I am aware of the constraints of a single-node Spark cluster. What I want to
know is what happens when additional users connect through sparklyr in
RStudio and Spark runs short of resources: the new jobs should simply be
held in a queue rather than crashing.
I think this is not specific to RStudio/sparklyr; it applies equally to
Spark/PySpark on a single-node cluster.
Please share the best way to allocate Spark resources in this scenario.
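
For reference, this is roughly the kind of per-user connection I have in
mind. It is only a sketch: it assumes a Spark standalone master/worker
running on the same host (the spark://localhost:7077 URL), and the core and
memory caps are placeholder values I would tune for the machine.

  library(sparklyr)

  conf <- spark_config()
  # Cap each user's session so that three to four sessions can fit on the node.
  # These values are placeholders, not a recommendation.
  conf$spark.cores.max       <- 2
  conf$spark.executor.memory <- "4g"
  conf$spark.driver.memory   <- "2g"

  # Assumes a standalone master/worker on this host; the hope is that a
  # session which cannot get its cores waits instead of failing.
  sc <- spark_connect(master = "spark://localhost:7077", config = conf)

Is this the right direction, or is there a better way to get the queueing
behaviour described above?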


Thanks,
Elango
