jorisvandenbossche commented on pull request #6979: URL: https://github.com/apache/arrow/pull/6979#issuecomment-754686227
BTW, the Dataset API also provides a way to get an iterator over record batches (`Dataset.to_batches()`). The strange thing is that it appears to use a different logic for how many rows end up in each batch when crossing row group boundaries, even though in the end it also goes through `GetRecordBatchReader` with `batch_size` set in the reader properties. In the session below, `size` is 300, so `row_group_size=100` gives three row groups (the import and `size` lines are restored here for completeness, using the standard aliases):

```python
In [113]: import pyarrow as pa, pyarrow.dataset as ds, pyarrow.parquet as pq

In [114]: import pandas as pd

In [115]: size = 300

In [116]: table = pa.table({'str': [str(x) for x in range(size)], 'str_cat': pd.Categorical([str(x) for x in range(size)])})

In [117]: pq.write_table(table, "test.parquet", row_group_size=100)

In [118]: f = pq.ParquetFile("test.parquet")

In [119]: [b.num_rows for b in f.iter_batches(batch_size=80)]
Out[119]: [80, 20, 60, 40, 40, 60]

In [120]: [b.num_rows for b in ds.dataset("test.parquet").to_batches()]
Out[120]: [100, 100, 100]

In [121]: [b.num_rows for b in ds.dataset("test.parquet").to_batches(batch_size=80)]
Out[121]: [80, 20, 80, 20, 80, 20]
```
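For what it's worth, here is a rough pure-Python sketch of the two splitting policies these outputs *suggest* (this is only an inference from the numbers above, not the actual C++ implementation, and the helper names are made up for illustration). Neither path produces a batch that spans row groups; the difference seems to be that `iter_batches` cuts at every *global* multiple of `batch_size` plus every row group boundary, while `to_batches` restarts the `batch_size` grid at each row group:

```python
# Sketch of the two batch-splitting policies inferred from the outputs
# above. Hypothetical helpers, not Arrow code.

def split_global_grid(row_group_sizes, batch_size):
    """Cut at every global multiple of batch_size AND at every row
    group boundary -- reproduces the ParquetFile.iter_batches output."""
    total = sum(row_group_sizes)
    cuts = set(range(batch_size, total, batch_size))
    pos = 0
    for n in row_group_sizes:
        pos += n
        cuts.add(pos)  # never cross a row group boundary
    edges = [0] + sorted(cuts)
    return [b - a for a, b in zip(edges, edges[1:])]

def split_per_group(row_group_sizes, batch_size):
    """Restart the batch grid at each row group -- reproduces the
    Dataset.to_batches output."""
    out = []
    for n in row_group_sizes:
        while n > batch_size:
            out.append(batch_size)
            n -= batch_size
        out.append(n)
    return out

print(split_global_grid([100, 100, 100], 80))  # [80, 20, 60, 40, 40, 60]
print(split_per_group([100, 100, 100], 80))    # [80, 20, 80, 20, 80, 20]
```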