[ 
https://issues.apache.org/jira/browse/ARROW-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joris Van den Bossche updated ARROW-5086:
-----------------------------------------
    Labels: parquet  (was: )

> [Python] Space leak in ParquetFile.read_row_group()
> ---------------------------------------------------
>
>                 Key: ARROW-5086
>                 URL: https://issues.apache.org/jira/browse/ARROW-5086
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.12.1
>            Reporter: Jakub Okoński
>            Priority: Major
>              Labels: parquet
>         Attachments: all.png
>
>
> I have a code pattern like this:
>  
> import pyarrow.parquet as pq
>  
> reader = pq.ParquetFile(path)
> for ix in range(0, reader.num_row_groups):
>     table = reader.read_row_group(ix, columns=self._columns)
>     # operate on table
>  
> But it leaks memory over time, only releasing it when the reader object is 
> garbage-collected. Here's a workaround:
>  
> num_row_groups = pq.ParquetFile(path).num_row_groups
> for ix in range(0, num_row_groups):
>     table = pq.ParquetFile(path).read_row_group(ix, columns=self._columns)
>     # operate on table
>  
> This puts an upper bound on memory usage and is what I'd expect from the 
> code. I also put a gc.collect() call at the end of every loop iteration.
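>  
> Put together, the loop looks roughly like this (a minimal sketch, reusing 
> path and self._columns from the snippets above):
>  
> import gc
> import pyarrow.parquet as pq
>  
> num_row_groups = pq.ParquetFile(path).num_row_groups
> for ix in range(0, num_row_groups):
>     # fresh reader each iteration, so the previous one can be collected
>     table = pq.ParquetFile(path).read_row_group(ix, columns=self._columns)
>     # operate on table
>     del table     # drop the last reference to the row group's data
>     gc.collect()  # force a collection before the next iteration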
>  
> I charted memory usage for a small benchmark that just copies a file, one 
> row group at a time, converting to pandas and back to Arrow on the writer 
> path. The black line is the first approach, using a single reader object; 
> the blue line is the workaround, instantiating a fresh reader in every 
> iteration.
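>  
> The benchmark is roughly the following (a reconstruction rather than the 
> exact script; src and dst are placeholder paths):
>  
> import pyarrow as pa
> import pyarrow.parquet as pq
>  
> def copy_file(src, dst):
>     reader = pq.ParquetFile(src)
>     writer = None
>     for ix in range(reader.num_row_groups):
>         table = reader.read_row_group(ix)
>         # round-trip through pandas on the writer path
>         table = pa.Table.from_pandas(table.to_pandas())
>         if writer is None:
>             writer = pq.ParquetWriter(dst, table.schema)
>         writer.write_table(table)
>     if writer is not None:
>         writer.close()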



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
