mapleFU commented on code in PR #37400:
URL: https://github.com/apache/arrow/pull/37400#discussion_r1946032202
##########
cpp/src/parquet/column_writer.cc:
##########
@@ -1791,11 +1810,15 @@ Status TypedColumnWriterImpl<DType>::WriteArrowDictionary(
                                   &exec_ctx));
       referenced_dictionary = referenced_dictionary_datum.make_array();
     }
-
-    int64_t non_null_count = chunk_indices->length() - chunk_indices->null_count();
-    page_statistics_->IncrementNullCount(num_chunk_levels - non_null_count);
-    page_statistics_->IncrementNumValues(non_null_count);
-    page_statistics_->Update(*referenced_dictionary, /*update_counts=*/false);
+    if (page_statistics_ != nullptr) {
+      int64_t non_null_count = chunk_indices->length() - chunk_indices->null_count();
+      page_statistics_->IncrementNullCount(num_chunk_levels - non_null_count);
+      page_statistics_->IncrementNumValues(non_null_count);
+      page_statistics_->Update(*referenced_dictionary, /*update_counts=*/false);
+    }
+    if (bloom_filter_ != nullptr) {
+      UpdateBloomFilterArray(*referenced_dictionary);
Review Comment:
> If we can accept the bloom filter to contain more values than it should
have, data.dictionary() seems to be sufficient.

Emm, this might be a good way, but it can also make the set of inserted values huge (the dictionary may contain many entries this chunk never references).

> BTW, if we already have dictionary encoding, should we simply disable
building bloom filter in this case?

AFAIK these are two different things?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]