[ https://issues.apache.org/jira/browse/PARQUET-353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gabor Szadovszky updated PARQUET-353:
-------------------------------------
    Fix Version/s: 1.8.2

> Compressors not getting recycled while writing parquet files, causing memory 
> leak
> ---------------------------------------------------------------------------------
>
>                 Key: PARQUET-353
>                 URL: https://issues.apache.org/jira/browse/PARQUET-353
>             Project: Parquet
>          Issue Type: Bug
>          Components: parquet-mr
>    Affects Versions: 1.6.0, 1.7.0, 1.8.0
>            Reporter: Nitin Goyal
>            Assignee: Nitin Goyal
>            Priority: Major
>             Fix For: 1.9.0, 1.8.2
>
>
> Compressors are not being recycled while writing parquet files. This causes 
> a native/physical memory leak in my Spark app, which is parquet-write 
> intensive, since new compressors are created every time I write parquet files.
> The underlying code issue is that we create a 'codecFactory' in the 
> 'getRecordWriter' method of ParquetOutputFormat.java but never call 
> codecFactory.release(), which is responsible for recycling compressors.
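A minimal sketch of the leak pattern described above, using hypothetical simplified stand-ins for the real CodecFactory and compressor classes (this is illustrative only, not actual parquet-mr code): compressors handed out by the factory are only returned to the shared pool when release() is called, so a writer that never calls it allocates fresh compressors, and their native buffers, on every write.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical, simplified model of compressor pooling; names mirror the
// issue description but the classes here are illustrative stand-ins.
public class CompressorRecycling {

    static class Compressor { }

    static class CodecFactory {
        // Shared pool of reusable compressors.
        private static final Deque<Compressor> POOL = new ArrayDeque<>();
        // Compressors this factory has handed out and not yet returned.
        private final List<Compressor> handedOut = new ArrayList<>();

        static int pooled() { return POOL.size(); }

        Compressor getCompressor() {
            Compressor c = POOL.poll();          // reuse a pooled compressor if available
            if (c == null) c = new Compressor(); // otherwise allocate a new one
            handedOut.add(c);
            return c;
        }

        // The call that getRecordWriter() was missing: without release(),
        // handed-out compressors are never returned to the pool, so each
        // new writer allocates more native memory.
        void release() {
            POOL.addAll(handedOut);
            handedOut.clear();
        }
    }

    public static void main(String[] args) {
        // Leaky path: two compressors allocated, never recycled.
        CodecFactory leaky = new CodecFactory();
        leaky.getCompressor();
        leaky.getCompressor();
        System.out.println("pooled after leak: " + CodecFactory.pooled());

        // Fixed path: release() recycles compressors when the writer is done.
        CodecFactory fixed = new CodecFactory();
        fixed.getCompressor();
        fixed.getCompressor();
        fixed.release();
        System.out.println("pooled after release: " + CodecFactory.pooled());
    }
}
```

The reported fix follows this shape: have the record writer invoke codecFactory.release() when it closes, so compressors go back to the pool instead of being re-created per file.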



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)