wecharyu commented on code in PR #48468:
URL: https://github.com/apache/arrow/pull/48468#discussion_r2689107870
##########
cpp/src/parquet/arrow/writer.cc:
##########
@@ -480,17 +485,28 @@ class FileWriterImpl : public FileWriter {
return Status::OK();
};
+  // Max number of rows allowed in a row group.
+  const int64_t max_row_group_length = this->properties().max_row_group_length();
+  // Max number of bytes allowed in a row group.
+  const int64_t max_row_group_bytes = this->properties().max_row_group_bytes();
+
   int64_t offset = 0;
   while (offset < batch.num_rows()) {
-    const int64_t batch_size =
-        std::min(max_row_group_length - row_group_writer_->num_rows(),
-                 batch.num_rows() - offset);
-    RETURN_NOT_OK(WriteBatch(offset, batch_size));
-    offset += batch_size;
-
-    // Flush current row group writer and create a new writer if it is full.
-    if (row_group_writer_->num_rows() >= max_row_group_length &&
-        offset < batch.num_rows()) {
+    int64_t group_rows = row_group_writer_->num_rows();
+    int64_t batch_size =
+        std::min(max_row_group_length - group_rows, batch.num_rows() - offset);
+    if (group_rows > 0) {
Review Comment:
👍 I originally wanted to cache the compressed bytes and row counts in
`FileWriterImpl` in case we needed to consider all previously written row
groups. It turns out they are already available in the metadata.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]