Indhumathi27 commented on a change in pull request #3584: [CARBONDATA-3718] 
Support SegmentLevel MinMax for better Pruning and less driver memory usage for 
cache
URL: https://github.com/apache/carbondata/pull/3584#discussion_r389660374
 
 

 ##########
 File path: docs/configuration-parameters.md
 ##########
 @@ -146,6 +146,7 @@ This section provides the details of all the 
configurations required for the Car
 | carbon.query.prefetch.enable | true | By default this property is true, so prefetch is used in queries to read the next blocklet asynchronously in another thread while the current blocklet is processed in the main thread. This can help to reduce CPU idle time. Setting this property to false disables the prefetch feature in queries. |
 | carbon.query.stage.input.enable | false | Stage input files are data files written by external applications (such as Flink) that have not yet been loaded into the carbon table. Enabling this configuration makes queries include these files, so queries see the latest data. However, since these files are not indexed, queries may be slower because a full scan is required for them. |
 | carbon.driver.pruning.multi.thread.enable.files.count | 100000 | Enables multi-threaded pruning when the total number of segment files for a query exceeds the configured value. |
+| carbon.load.all.indexes.to.cache | true | Setting this configuration to false will prune and load only the matched segment indexes into the cache, using segment min/max information, which decreases driver memory usage. |
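To illustrate the idea behind the configuration above, here is a minimal, hypothetical sketch (not CarbonData's actual implementation) of segment-level min/max pruning: a segment's index only needs to be loaded into the cache if the segment's min/max range for the filter column overlaps the query's filter range.

```python
# Illustrative sketch of segment-level min/max pruning.
# Segment names and the data layout here are made up for the example.

def prune_segments(segments, column, lo, hi):
    """Return names of segments whose [min, max] for `column` overlaps [lo, hi]."""
    matched = []
    for seg in segments:
        seg_min, seg_max = seg["minmax"][column]
        # Two ranges overlap iff each one's max reaches the other's min.
        if seg_max >= lo and seg_min <= hi:
            matched.append(seg["name"])
    return matched

segments = [
    {"name": "Segment_0", "minmax": {"id": (1, 100)}},
    {"name": "Segment_1", "minmax": {"id": (101, 200)}},
    {"name": "Segment_2", "minmax": {"id": (201, 300)}},
]

# For a filter like `id BETWEEN 150 AND 180`, only Segment_1's
# index needs to be loaded into the driver cache.
print(prune_segments(segments, "id", 150, 180))  # ['Segment_1']
```

With `carbon.load.all.indexes.to.cache` left at its default (`true`), the indexes of all segments would be cached regardless of the filter; setting it to `false` caches only the matched ones, which is where the driver memory saving comes from.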
 
 Review comment:
   It is a session-level carbon property and cannot be changed dynamically. So, once we restart the session, the cache will be cleared automatically.
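Since it is a session-level property, it would be set in `carbon.properties` before the session starts rather than via a dynamic `SET` command. A sketch of such an entry (only the property name comes from this PR; the surrounding file contents are assumed):

```properties
# carbon.properties (illustrative fragment)
# Cache only the segment indexes matched by min/max pruning.
carbon.load.all.indexes.to.cache=false
```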

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
