[ https://issues.apache.org/jira/browse/PARQUET-1239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16382757#comment-16382757 ]
Wes McKinney commented on PARQUET-1239:
---------------------------------------

I moved this issue to Apache Parquet because that's where the fix will need to be implemented. Thank you!

> [C++] Arrow table reads error when overflowing capacity of BinaryArray
> ----------------------------------------------------------------------
>
>                 Key: PARQUET-1239
>                 URL: https://issues.apache.org/jira/browse/PARQUET-1239
>             Project: Parquet
>          Issue Type: Bug
>          Components: parquet-cpp
>    Affects Versions: cpp-1.4.0
>            Reporter: Chris Ellison
>            Priority: Major
>             Fix For: cpp-1.5.0
>
> When reading a Parquet file containing more than 2 GiB of binary data in a column, reading the whole file raises an ArrowIOError because the read path does not split the result into chunked arrays. Reading each row group individually and then concatenating the tables works, however.
>
> {code:python}
> import pyarrow as pa
> import pyarrow.parquet as pq
>
> x = pa.array(list('1' * 2**30))
> demo = 'demo.parquet'
>
> def scenario():
>     t = pa.Table.from_arrays([x], ['x'])
>     writer = pq.ParquetWriter(demo, t.schema)
>     for i in range(2):
>         writer.write_table(t)
>     writer.close()
>
>     pf = pq.ParquetFile(demo)
>
>     # Raises pyarrow.lib.ArrowIOError: Arrow error: Invalid: BinaryArray
>     # cannot contain more than 2147483646 bytes, have 2147483647
>     t2 = pf.read()
>
>     # Works, but note there are 32 row groups, not 2 as suggested by:
>     # https://arrow.apache.org/docs/python/parquet.html#finer-grained-reading-and-writing
>     tables = [pf.read_row_group(i) for i in range(pf.num_row_groups)]
>     t3 = pa.concat_tables(tables)
>
> scenario()
> {code}
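For anyone hitting this before a fix ships, here is a minimal sketch of the row-group workaround from the report, packaged as a reusable helper. The helper name read_in_chunks is illustrative and not from the report; the sketch reuses the demo.parquet file written by scenario() above, and the final print assumes a pyarrow version in which Table.column() returns a ChunkedArray.

{code:python}
import pyarrow as pa
import pyarrow.parquet as pq

def read_in_chunks(path):
    # Read one row group at a time and concatenate, so a large binary/string
    # column lands in multiple chunks instead of a single BinaryArray that
    # would overflow the 2**31 - 2 byte offset capacity.
    pf = pq.ParquetFile(path)
    tables = [pf.read_row_group(i) for i in range(pf.num_row_groups)]
    return pa.concat_tables(tables)

t = read_in_chunks('demo.parquet')
# Each column is chunked, one chunk per row group (32 here, per the report).
print(t.column('x').num_chunks)
{code}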