[ https://issues.apache.org/jira/browse/KUDU-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047539#comment-17047539 ]

caiconghui commented on KUDU-3056:
----------------------------------

It works well, [~granthenke]. Thanks!

> kudu-spark HdrHistogramAccumulator is too big and makes Spark jobs fail
> -------------------------------------------------------------------------
>
>                 Key: KUDU-3056
>                 URL: https://issues.apache.org/jira/browse/KUDU-3056
>             Project: Kudu
>          Issue Type: Bug
>          Components: spark
>    Affects Versions: 1.9.0
>            Reporter: caiconghui
>            Assignee: Grant Henke
>            Priority: Major
>             Fix For: 1.12.0
>
>         Attachments: heap1.png, heap2.png, heap3.png
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> In our production environment, we use kudu-spark to read Kudu tables. Even
> though we never use the HdrHistogramAccumulator, the HdrHistogramAccumulator
> instances stored in an array are still large, almost 2 MB in total. As a
> result, when the number of kudu-spark tasks (which read Kudu data and
> shuffle it) exceeds 900, the Spark job fails with the following error:
>
> Job aborted due to stage failure: Total size of serialized results of 1413 
> tasks (3.0 GB) is bigger than spark.driver.maxResultSize (3.0 GB)
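
For context, a back-of-the-envelope sketch (in Scala, the language of
kudu-spark) of why the per-task accumulator payload overflows
spark.driver.maxResultSize, plus the usual stopgap of raising that limit.
The 2 MB and 1413-task figures come from the report above; the SparkSession
setup is a hypothetical illustration, not the reporter's actual job:

    import org.apache.spark.sql.SparkSession

    object ResultSizeSketch {
      def main(args: Array[String]): Unit = {
        // Rough arithmetic using the numbers reported above: each task ships
        // about 2 MB of HdrHistogramAccumulator state back to the driver.
        val perTaskBytes = 2L * 1024 * 1024          // ~2 MB of accumulator state per task
        val taskCount    = 1413L                     // task count from the error message
        val totalBytes   = perTaskBytes * taskCount  // ~2.8 GB, on the order of the 3.0 GB reported

        println(s"serialized results ~= ${totalBytes / (1024.0 * 1024 * 1024)} GB")

        // That total exceeds spark.driver.maxResultSize, so the stage is
        // aborted. Raising the limit is only a stopgap; the real fix in
        // KUDU-3056 (Fix For: 1.12.0) shrinks the accumulator itself.
        val spark = SparkSession.builder()
          .appName("kudu-read")                        // hypothetical job name
          .config("spark.driver.maxResultSize", "6g")  // stopgap only
          .getOrCreate()

        spark.stop()
      }
    }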



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
