Hello, I'm currently running R code in an executor via JRI. Because R is single-threaded, every call into R has to be wrapped in a `synchronized` block. As a result, each executor can use little more than one core, which is undesirable. Is there a way to tell Spark that this specific application (or even a specific UDF) needs multiple JVMs? Or should I switch from JRI to a pipe-based (slower) setup?
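For reference, my current wrapper looks roughly like this (class and method names are simplified, not my actual code):

```java
import org.rosuda.JRI.REXP;
import org.rosuda.JRI.Rengine;

// Hypothetical sketch: all access to the single R engine is serialized
// through one lock, because R itself is single-threaded and JRI allows
// only one Rengine instance per process.
public final class RBridge {
    private static final Rengine ENGINE =
        new Rengine(new String[] {"--vanilla"}, false, null);

    public static synchronized REXP eval(String rCode) {
        // Every Spark task on this executor funnels through this lock,
        // so at most one task at a time is doing R work per JVM.
        return ENGINE.eval(rCode);
    }
}
```

This is why the executor effectively saturates only one core: however many task slots the executor has, the R calls all queue behind that single lock.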
Cheers,
Simon