Mathieu,

there is no rocket science there. Essentially it creates a DataFrame and
then calls fit from the ML pipeline. The thing which I do not understand is
how the parallelization is done in terms of the ML algorithm. Is it based on
the parallelism of the DataFrame? Because the ML algorithm doesn't offer
such a setting, AFAIK. There is only the notion of max depth, pruning etc.,
but none of these concern parallelization.
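
My working assumption is that the parallelism comes from the partitioning
of the input DataFrame itself, so repartitioning before fit should spread
the training tasks across the executors. A rough sketch of what I mean
(trainingDF and pipeline stand in for whatever my app builds):

import org.apache.spark.ml.Pipeline
import org.apache.spark.sql.DataFrame

// assumption: tree training parallelism follows the input partitioning,
// so repartition before fit; 24 is a guess (~2-3x the cluster's cores)
def train(pipeline: Pipeline, trainingDF: DataFrame) = {
  val partitioned = trainingDF.repartition(24)
  pipeline.fit(partitioned)
}

Is that the right way to think about it?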

On 4 July 2016 at 17:51, Mathieu Longtin <math...@closetwork.org> wrote:

> When the driver is running out of memory, it usually means you're loading
> data in a non-parallel way (without using an RDD). Make sure anything that
> requires a non-trivial amount of memory is done by an RDD. Also, the default
> memory for everything is 1GB, which may not be enough for you.
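>
> For example, something like this reads the table in parallel straight into
> the executors instead of through the driver (untested sketch; sqlContext,
> jdbcUrl and the column names are placeholders for your setup):
>
> // parallel JDBC read: Spark issues numPartitions queries, one per task
> val df = sqlContext.read.format("jdbc")
>   .option("url", jdbcUrl)                // your connection string
>   .option("dbtable", "training_data")    // your table
>   .option("partitionColumn", "id")       // any numeric column
>   .option("lowerBound", "1")
>   .option("upperBound", "1000000")
>   .option("numPartitions", "12")
>   .load()
>
> And if the driver legitimately needs more than the 1GB default, you can
> raise it with --driver-memory (and --executor-memory) on spark-submit.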
>
> On Mon, Jul 4, 2016 at 11:44 AM Mich Talebzadeh <mich.talebza...@gmail.com>
> wrote:
>
>> Hi Jakub,
>>
>> In standalone mode Spark does the resource management. Which version of
>> Spark are you running?
>>
>> How do you define your SparkConf() parameters, for example setMaster etc.?
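>>
>> For example, something along these lines (values purely illustrative):
>>
>> import org.apache.spark.{SparkConf, SparkContext}
>>
>> val sparkConf = new SparkConf()
>>   .setAppName("DemoApp")
>>   .setMaster("spark://<master-host>:7077")  // your standalone master URL
>>   .set("spark.executor.memory", "4g")
>>   .set("spark.cores.max", "12")
>> val sc = new SparkContext(sparkConf)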
>>
>> From
>>
>> spark-submit --driver-class-path spark/sqljdbc4.jar --class DemoApp
>> SparkPOC.jar 10 4.3
>>
>> I did not see any executor or memory allocation, so I assume you are
>> allocating them somewhere else?
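>>
>> Otherwise I would expect something like this (figures illustrative only):
>>
>> spark-submit --driver-class-path spark/sqljdbc4.jar \
>>   --master spark://<master-host>:7077 \
>>   --driver-memory 4g \
>>   --executor-memory 4g \
>>   --total-executor-cores 12 \
>>   --class DemoApp SparkPOC.jar 10 4.3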
>>
>> HTH
>>
>>
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn:
>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> Disclaimer: Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>> On 4 July 2016 at 16:31, Jakub Stransky <stransky...@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> I have a Spark cluster consisting of 4 nodes in standalone mode:
>>> a master + 3 worker nodes, with available memory, CPUs etc. configured.
>>>
>>> I have a Spark application which is essentially an MLlib pipeline for
>>> training a classifier, in this case RandomForest, but it could be a
>>> DecisionTree just for the sake of simplicity.
>>>
>>> But when I submit the Spark application to the cluster via spark-submit,
>>> it runs out of memory. Even though the executors are "taken"/created
>>> in the cluster, they are essentially doing nothing (poor CPU and memory
>>> utilization) while the master seems to do all the work, which finally
>>> results in an OOM.
>>>
>>> My submission is the following:
>>> spark-submit --driver-class-path spark/sqljdbc4.jar --class DemoApp
>>> SparkPOC.jar 10 4.3
>>>
>>> I am submitting from the master node.
>>>
>>> By default it is running in client mode, in which the driver process is
>>> attached to the shell that ran spark-submit.
>>>
>>> Do I need to set some settings to make MLlib algos parallelized and
>>> distributed as well, or is it all driven by the parallelism set on the
>>> DataFrame with the input data?
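>>>
>>> For what it's worth, a quick check I can run on the input (inputDF is a
>>> placeholder for my training DataFrame):
>>>
>>> // if this prints 1, all training tasks land on a single executor
>>> println(inputDF.rdd.partitions.length)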
>>>
>>> Essentially it seems that all the work is done on the master and the
>>> rest is idle.
>>> Any hints on what to check?
>>>
>>> Thx
>>> Jakub
>>>
>>>
>>>
>>>
>> --
> Mathieu Longtin
> 1-514-803-8977
>



-- 
Jakub Stransky
cz.linkedin.com/in/jakubstransky
