[ 
https://issues.apache.org/jira/browse/SPARK-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dylanzhou updated SPARK-13183:
------------------------------
    Comment: was deleted

(was: [~srowen] I don't know whether this is a memory leak problem. I get the heap 
memory error java.lang.OutOfMemoryError: Java heap space. When I try to increase 
the driver memory, the streaming program just runs a little longer. In my opinion 
the byte[] objects cannot be reclaimed by the GC; these objects hold the cached 
Spark SQL table rows. When I increase the amount of data flowing into Kafka, 
memory is consumed even faster. Can you give me some advice? Here is my question, 
thank you!
http://apache-spark-user-list.1001560.n3.nabble.com/the-memory-leak-problem-of-use-sparkstreamimg-and-sparksql-with-kafka-in-spark-1-4-1-td26231.html)

> Bytebuffers occupy a large amount of heap memory
> ------------------------------------------------
>
>                 Key: SPARK-13183
>                 URL: https://issues.apache.org/jira/browse/SPARK-13183
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.1
>            Reporter: dylanzhou
>
> When I used Spark Streaming and Spark SQL and cached the table, I found that the 
> old generation grows very fast and full GCs are very frequent; after running for 
> a while the job runs out of memory. Analyzing the heap, I found a large number of 
> org.apache.spark.sql.columnar.ColumnBuilder[38] @ 0xd022a0b8 objects taking up 
> 90% of the space; looking at the source, the space is occupied by HeapByteBuffer. 
> I don't know why these objects are never released and just keep waiting for the 
> GC to reclaim them. If I do not cache the table, the problem does not occur, but 
> I need to query this table repeatedly.
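
For illustration, a minimal sketch of the pattern described above, not taken from
the ticket itself; the Event class, the "events" table name, and the Kafka settings
are hypothetical, written against the Spark 1.4.x Scala API:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

// Hypothetical record schema; the ticket does not show the real one.
case class Event(key: String, value: String)

object CacheTableSketch {
  def main(args: Array[String]): Unit = {
    val sc  = new SparkContext(new SparkConf().setAppName("CacheTableSketch"))
    val ssc = new StreamingContext(sc, Seconds(10))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Placeholder Kafka settings (ZooKeeper quorum, group id, topic -> threads).
    val stream = KafkaUtils.createStream(
      ssc, "zkhost:2181", "sketch-group", Map("events" -> 1))

    stream.foreachRDD { rdd =>
      val df = rdd.map { case (k, v) => Event(k, v) }.toDF()
      df.registerTempTable("events")
      // The temp table is re-cached on every batch; per the report, the columnar
      // byte[] buffers built for earlier batches stay on the heap.
      sqlContext.cacheTable("events")
      sqlContext.sql("SELECT value, COUNT(*) FROM events GROUP BY value").count()
      // Possible mitigation (not verified in the ticket): drop this batch's cached
      // buffers once the queries are done, e.g. sqlContext.uncacheTable("events").
    }

    ssc.start()
    ssc.awaitTermination()
  }
}

Whether an explicit uncacheTable avoids the growth reported here is not established
by the ticket; the sketch only makes the cache-per-batch pattern concrete.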



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
