I am still a little confused about workers, executors, and JVMs in
standalone mode.
Are worker processes and executors independent JVMs, or do executors run
within the worker JVM?
I have some memory-rich nodes (192 GB) and I would like to avoid deploying
massive JVMs, due to the well-known performance problems with very large
heaps (e.g., long GC pauses).
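For concreteness, this is the kind of setup I am considering to keep individual heaps small (a sketch only — the values are illustrative, assuming the standard standalone-mode variables SPARK_WORKER_INSTANCES, SPARK_WORKER_MEMORY, and SPARK_WORKER_CORES in conf/spark-env.sh):

```shell
# conf/spark-env.sh (sketch; illustrative values, not a recommendation)
# Run several smaller worker JVMs per node instead of one huge one.
export SPARK_WORKER_INSTANCES=6   # worker JVMs per node
export SPARK_WORKER_MEMORY=30g    # memory each worker can hand to executors
export SPARK_WORKER_CORES=8      # cores each worker can hand to executors
```

Whether this actually helps depends on the worker/executor JVM relationship I am asking about above.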
I am also confused as to whether Avro support was merged into Spark 1.2 or
whether it is still an independent library.
I see some people writing sqlContext.avroFile, analogous to jsonFile, but
this does not work for me, nor do I see it in the Scala docs.
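For reference, this is the pattern I have seen used (a sketch assuming the separate databricks spark-avro package is on the classpath and that `sc` is an existing SparkContext, e.g. inside spark-shell; the file path is a made-up example). This is what fails for me against plain Spark 1.2:

```scala
import org.apache.spark.sql.SQLContext
// avroFile is provided by the external spark-avro package,
// not by Spark itself, as far as I can tell.
import com.databricks.spark.avro._

val sqlContext = new SQLContext(sc)            // sc from the shell
val episodes = sqlContext.avroFile("episodes.avro")  // hypothetical path
episodes.registerTempTable("episodes")
```

So my question is whether this import is still required in 1.2 or whether the method should be built in.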