jozahner commented on code in PR #10829:
URL: https://github.com/apache/nifi/pull/10829#discussion_r2743593538


##########
nifi-extension-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroReader.java:
##########
@@ -60,12 +60,27 @@ public class AvroReader extends SchemaRegistryService implements RecordReaderFac
             .required(true)
             .build();
 
+    static final PropertyDescriptor FAST_READER_ENABLED = new PropertyDescriptor.Builder()
+            .name("Fast Reader Enabled")
+            .description("""
+                    When enabled, the Avro library uses an optimized reader implementation that improves read performance
+                    by creating a detailed execution plan at initialization. However, this optimization can lead to
+                    significantly higher memory consumption, especially when using schema inference. If OutOfMemory errors
+                    occur during Avro processing, consider disabling this option.""")
+            .addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+            .allowableValues("true", "false")
+            .defaultValue("true")

Review Comment:
   We have a max heap size of 31 GB configured, and under normal circumstances we use around 5 GB of heap memory, so I don't think this is related to the maximum heap size settings.
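   For context, the property descriptor in the diff only exposes a boolean toggle; the sketch below shows how such a value might be resolved into a reader strategy. This is a hypothetical illustration (class, enum, and method names are mine, not the actual NiFi implementation), reflecting the descriptor's declared `allowableValues("true", "false")` and `defaultValue("true")`:

```java
// Hypothetical sketch of resolving the "Fast Reader Enabled" property into a
// reader strategy. Not the actual NiFi AvroReader code; names are illustrative.
public class FastReaderToggle {

    /** The two strategies the property chooses between. */
    enum ReaderMode { FAST, DEFAULT }

    /**
     * Fast mode trades higher memory at initialization (a precomputed
     * execution plan) for faster reads, matching the property description.
     */
    static ReaderMode resolveMode(String propertyValue) {
        // The descriptor declares defaultValue("true"), so an unset
        // property falls back to the fast reader.
        boolean fastEnabled = propertyValue == null || Boolean.parseBoolean(propertyValue);
        return fastEnabled ? ReaderMode.FAST : ReaderMode.DEFAULT;
    }

    public static void main(String[] args) {
        System.out.println(resolveMode("true"));   // FAST (explicit opt-in)
        System.out.println(resolveMode("false"));  // DEFAULT (OOM workaround)
        System.out.println(resolveMode(null));     // FAST (property default)
    }
}
```

   Disabling the property ("false") is the workaround discussed in this thread when OutOfMemory errors occur during Avro processing.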



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
