Did anyone have an issue setting the spark.driver.maxResultSize value?

On Friday, October 30, 2015, karthik kadiyam <karthik.kadiyam...@gmail.com>
wrote:

> Hi Shahid,
>
> I played around with the Spark driver memory too. In the conf file it was
> set to "--driver-memory 20G" first. When I changed spark.driver.maxResultSize
> from the default to 2g, I also changed the driver memory to 30G and tried
> again. It gave me the same error saying "bigger than
> spark.driver.maxResultSize (1024.0 MB)".
> One other thing I observed: in one of the tasks, the data it is trying to
> process is more than 100 MB, and that executor and task keep losing the
> connection and retrying. I tried increasing the number of tasks by
> repartitioning from 120 to 240, and then to 480. Still, one of my tasks is
> trying to process more than 100 MB, while the other tasks hardly process
> 1 MB to 10 MB; some are around 20 MB, some 0 MB.
>
> Any idea how I can even out the data distribution across the nodes?
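>
> A common cause of one oversized task is key skew: a single hot key hashes
> into one partition, and repartitioning alone will not split it. A minimal
> salting sketch, assuming a hypothetical JavaPairRDD<String, Long> named
> "pairs" and a sum aggregation (the fan-out factor and key format below are
> illustrative, not from this thread):
>
> import java.util.Random;
> import org.apache.spark.api.java.JavaPairRDD;
> import scala.Tuple2;
>
> // Append a random salt to each key so a hot key spreads over many tasks,
> // aggregate per salted key, then strip the salt and aggregate again.
> int salts = 32;                // assumed fan-out factor
> Random rnd = new Random();
> JavaPairRDD<String, Long> evened = pairs
>     .mapToPair(kv -> new Tuple2<>(kv._1() + "#" + rnd.nextInt(salts), kv._2()))
>     .reduceByKey(Long::sum)    // partial aggregate per salted key
>     .mapToPair(kv -> new Tuple2<>(kv._1().split("#")[0], kv._2()))
>     .reduceByKey(Long::sum);   // final aggregate per original key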
>
> On Fri, Oct 30, 2015 at 12:09 AM, shahid ashraf <sha...@trialx.com> wrote:
>
>> Hi,
>> I guess you need to increase the Spark driver memory as well, but that
>> should be set in the conf files. Let me know if that resolves it.
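>>
>> For example, in conf/spark-defaults.conf (the values here are only
>> illustrative, matching the numbers tried in this thread):
>>
>>     spark.driver.memory          30g
>>     spark.driver.maxResultSize   2g
>>
>> Or equivalently at submit time:
>>
>>     spark-submit --driver-memory 30g --conf spark.driver.maxResultSize=2g ...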
>> On Oct 30, 2015 7:33 AM, "karthik kadiyam" <karthik.kadiyam...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> In a Spark Streaming job I had the following setting:
>>>
>>>             this.jsc.getConf().set("spark.driver.maxResultSize", "0");
>>> and I got the following error in the job:
>>>
>>> User class threw exception: Job aborted due to stage failure: Total size
>>> of serialized results of 120 tasks (1082.2 MB) is bigger than
>>> spark.driver.maxResultSize (1024.0 MB)
>>>
>>> Then I realized that the default value is 1 GB, so I changed
>>> the configuration as below:
>>>
>>> this.jsc.getConf().set("spark.driver.maxResultSize", "2g");
>>>
>>> and when I ran the job it gave the same error:
>>>
>>> User class threw exception: Job aborted due to stage failure: Total size
>>> of serialized results of 120 tasks (1082.2 MB) is bigger than
>>> spark.driver.maxResultSize (1024.0 MB)
>>>
>>> So the change I made is not being picked up by the job. My questions are:
>>>
>>> - Is setting "spark.driver.maxResultSize" to "2g" this way the right way
>>> to change it, or is there another way (see the sketch below)?
>>> - Is this a bug in Spark 1.3, or has anyone had this issue before?
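>>>
>>> The likely cause is that values set through getConf().set(...) after the
>>> context has been created are not applied; the property has to be on the
>>> SparkConf before the context is constructed (or passed via
>>> spark-submit --conf). A minimal sketch, assuming a 1-second batch
>>> interval and a hypothetical app name:
>>>
>>> import org.apache.spark.SparkConf;
>>> import org.apache.spark.streaming.Duration;
>>> import org.apache.spark.streaming.api.java.JavaStreamingContext;
>>>
>>> // Set driver limits on the SparkConf *before* creating the context;
>>> // mutating a live context's conf has no effect on this property.
>>> SparkConf conf = new SparkConf()
>>>     .setAppName("MyStreamingJob")              // hypothetical name
>>>     .set("spark.driver.maxResultSize", "2g");
>>> JavaStreamingContext jsc = new JavaStreamingContext(conf, new Duration(1000));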
>>>
>>>
>
