Hi All,

Following the threads in the Spark forum, I decided to write up notes on
Spark configuration, covering the allocation of resources, configuration
of the driver, executors and threads, execution of Spark applications,
and general troubleshooting, taking into account both the resources
allocated to Spark applications and the OS tools at our disposal.

Since the most widespread configuration I have noticed is "Spark
Standalone Mode", I have decided to start these notes with Standalone and
move on to YARN later:


   - *Standalone* – a simple cluster manager included with Spark that
   makes it easy to set up a cluster.
   - *YARN* – the resource manager in Hadoop 2.

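In case it helps frame the write-up, the practical difference between the two shows up mainly in how an application is submitted. The commands below are only a sketch: the master host, port, jar path and resource figures are placeholders, not values from the write-up.

```shell
# Standalone mode: point --master at the standalone master's spark:// URL.
# Resources are requested directly from the standalone master.
spark-submit \
  --master spark://master-host:7077 \
  --executor-memory 2G \
  --total-executor-cores 4 \
  --class org.apache.spark.examples.SparkPi \
  "$SPARK_HOME"/examples/jars/spark-examples.jar 100

# YARN mode: --master yarn; resources are negotiated with the
# YARN ResourceManager, and per-executor counts are set explicitly.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 2 \
  --executor-memory 2G \
  --executor-cores 2 \
  --class org.apache.spark.examples.SparkPi \
  "$SPARK_HOME"/examples/jars/spark-examples.jar 100
```

Note that in Standalone mode cores are capped per application with --total-executor-cores, whereas on YARN they are set per executor with --executor-cores and --num-executors.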

I would appreciate it if anyone interested in reading and commenting
would get in touch with me directly at mich.talebza...@gmail.com, so I
can send them the write-up for review and comments.


Just to be clear, this is not meant to be a commercial proposition or
anything like that. As I often get involved in members' troubleshooting
issues and threads on this topic, I thought it worthwhile to write a note
summarising the findings for the benefit of the community.


Regards.


Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com
