pratyakshsharma commented on a change in pull request #4210:
URL: https://github.com/apache/carbondata/pull/4210#discussion_r732718290
##########
File path: docs/configuration-parameters.md
##########
@@ -119,6 +136,7 @@ This section provides the details of all the configurations required for the Car
| carbon.enable.range.compaction | true | To configure whether Range-based Compaction is used for RANGE_COLUMN. If true, the data remains organized in ranges even after compaction. |
| carbon.si.segment.merge | false | Setting this to true degrades LOAD performance. When the number of small files increases for SI segments (which can happen because the number of columns is small and we store the position id and reference columns), the user can either set this to true, which merges the data files during upcoming loads, or run the SI refresh command, which does this job for all segments. (REFRESH INDEX <index_table>) |
| carbon.partition.data.on.tasklevel | false | When enabled, tasks launched for a local sort partition load are based on one node one task, and compaction is performed at task level for a partition. Load performance might degrade because, in the local sort case, the number of tasks launched equals the number of nodes. For compaction, memory consumption is lower because more tasks are launched per partition. |
+| carbon.minor.compaction.size | (none) | Minor compaction originally worked based on the number of segments (4 by default), so there was no control over the size of the segments being compacted. This parameter was introduced to exclude segments whose size is greater than the configured threshold, reducing the overall IO and time taken. |
Review comment:
Can this be set dynamically within the spark session itself? @vikramahuja1001
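If the property turns out to be dynamically configurable (an assumption here, which is exactly what the question above asks), it could be set per session via Spark SQL's `SET` command before triggering compaction. A minimal sketch, where the table name `sales` and the threshold value are hypothetical:

```sql
-- Assumption: carbon.minor.compaction.size is a dynamically
-- configurable property; the unit (MB) and value are illustrative.
SET carbon.minor.compaction.size = 512;

-- Trigger minor compaction; segments larger than the configured
-- threshold would then be excluded from the merge.
ALTER TABLE sales COMPACT 'MINOR';
```

If the property is static-only, it would instead have to be set in carbon.properties before the session starts.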
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]