Hi
the input data size is less than 10M. The task result size should be less
than that, I think, because I am doing aggregation on the data.
At 2016-04-20 16:18:31, "Jeff Zhang" wrote:
Do you mean the input data size as 10M or the task result size ?
>>> But my way is to set up a forever loop to handle continuously incoming
data. Not sure if it is the right way to use spark
Not sure what this means. Do you use spark-streaming, or do you run a batch
job inside the forever loop?
On Wed, Apr
Seems it is an OOM on the driver side when fetching task results.
You can try increasing spark.driver.memory and spark.driver.maxResultSize.
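As a minimal sketch, those two settings could be passed on the spark-submit
command line; the 4g/2g values, class name, and jar name below are
illustrative placeholders, not recommendations from this thread:

```shell
# Hypothetical invocation: raise driver heap and the cap on the total
# size of serialized task results collected back to the driver.
# Tune the values to your workload and available memory.
spark-submit \
  --conf spark.driver.memory=4g \
  --conf spark.driver.maxResultSize=2g \
  --class com.example.MyAggregationJob \
  my-aggregation-job.jar
```

Note that spark.driver.memory must be set before the driver JVM starts
(command line or spark-defaults.conf); setting it inside the application
after the SparkContext is created has no effect.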
On Tue, Apr 19, 2016 at 4:06 PM, 李明伟 wrote:
> Hi Zhan Zhang
>
>
> Please see the exception trace below. It is saying some GC overhead limit
>