writer-jill commented on code in PR #12344:
URL: https://github.com/apache/druid/pull/12344#discussion_r890221756


##########
docs/design/segments.md:
##########
@@ -23,231 +23,198 @@ title: "Segments"
   -->
 
 
-Apache Druid stores its index in *segment files*, which are partitioned by
-time. In a basic setup, one segment file is created for each time
+Apache Druid stores its index in *segment files* partitioned by
+time. In a basic setup, Druid creates one segment file for each time
 interval, where the time interval is configurable in the
 `segmentGranularity` parameter of the
-[`granularitySpec`](../ingestion/ingestion-spec.md#granularityspec).  For Druid to
-operate well under heavy query load, it is important for the segment
+[`granularitySpec`](../ingestion/ingestion-spec.md#granularityspec).
+
+For Druid to operate well under heavy query load, it is important for the segment
 file size to be within the recommended range of 300MB-700MB. If your
 segment files are larger than this range, then consider either
 changing the granularity of the time interval or partitioning your

Review Comment:
   Updated.
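For context on the parameter this passage points at, here is a minimal `granularitySpec` sketch from an ingestion spec's `dataSchema`; the field values (`day`, `none`, `rollup`) are illustrative, not recommendations from this PR:

```json
"granularitySpec": {
  "type": "uniform",
  "segmentGranularity": "day",
  "queryGranularity": "none",
  "rollup": true
}
```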



##########
docs/design/segments.md:
##########
@@ -23,231 +23,198 @@ title: "Segments"
   -->
 
 
-Apache Druid stores its index in *segment files*, which are partitioned by
-time. In a basic setup, one segment file is created for each time
+Apache Druid stores its index in *segment files* partitioned by
+time. In a basic setup, Druid creates one segment file for each time
 interval, where the time interval is configurable in the
 `segmentGranularity` parameter of the
-[`granularitySpec`](../ingestion/ingestion-spec.md#granularityspec).  For Druid to
-operate well under heavy query load, it is important for the segment
+[`granularitySpec`](../ingestion/ingestion-spec.md#granularityspec).
+
+For Druid to operate well under heavy query load, it is important for the segment
 file size to be within the recommended range of 300MB-700MB. If your
 segment files are larger than this range, then consider either
 changing the granularity of the time interval or partitioning your
-data and tweaking the `targetRowsPerSegment` in your `partitionsSpec`
-(a good starting point for this parameter is 5 million rows).  See the
-sharding section below and the 'Partitioning specification' section of
+data and adjusting the `targetRowsPerSegment` in your `partitionsSpec`.

Review Comment:
   Updated.
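A sketch of the `partitionsSpec` referenced in the revised text, as it would appear in a native batch task's `tuningConfig`; the `range` type and the example dimension `countryName` are illustrative, and `5000000` echoes the starting point the earlier wording suggested:

```json
"partitionsSpec": {
  "type": "range",
  "partitionDimensions": ["countryName"],
  "targetRowsPerSegment": 5000000
}
```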



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

