[ https://issues.apache.org/jira/browse/SOLR-12509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16549474#comment-16549474 ]

Andrzej Bialecki edited comment on SOLR-12509 at 7/19/18 4:45 PM:
-------------------------------------------------------------------

This patch implements a new method for shard splitting that uses 
{{HardlinkCopyDirectoryWrapper}}. The old method is still available and used by 
default; the new method can be selected by using the {{splitMethod=link}} 
request parameter (the old method can be explicitly selected with 
{{splitMethod=rewrite}}).

There's also support for a new {{timing}} parameter - when set to true, the 
SPLITSHARD command returns a "timing" section with elapsed times for each 
internal phase of the command execution.
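
For illustration, here is a minimal SolrJ sketch that issues such a request. This is an assumption-laden illustration, not code from the patch: it presumes {{splitMethod}} and {{timing}} are accepted as plain request parameters on the Collections API endpoint, and the collection/shard names are placeholders.
{code:java}
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class SplitShardLinkExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("action", "SPLITSHARD");
      params.set("collection", "test");   // placeholder collection name
      params.set("shard", "shard1");      // placeholder shard name
      params.set("splitMethod", "link");  // new in this patch; default is "rewrite"
      params.set("timing", "true");       // new in this patch
      GenericSolrRequest request =
          new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params);
      NamedList<Object> response = client.request(request);
      // With timing=true the response should contain the per-phase "timing" section.
      System.out.println(response.get("timing"));
    }
  }
}
{code}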

I've been testing the new implementation locally and on a cluster of 5 physical 
nodes, using collections ranging from 2 million up to 22 million documents 
(15 GB index size). The new method consistently outperforms the old one by a 
factor of 3 to 5, depending on the index size and the number of replicas.

The downside of the new method is that the resulting sub-shards initially have 
the same size as the original shard. On the shard leader these files are 
hard-linked, so they don't consume additional space, but replica nodes still 
need to fetch all that data, which affects network I/O, the initial disk 
consumption on replica nodes, and consequently the replica recovery time. The 
upside is that the total time is much shorter and the CPU / I/O load on the 
shard leader is negligible, unlike the old method, which is very I/O- and 
CPU-intensive.
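
For reference, this is roughly what the hard-link copy looks like at the Lucene 
level - a minimal standalone sketch using the Lucene misc module's 
{{HardlinkCopyDirectoryWrapper}}, not the actual {{SplitShardCmd}} code; the 
paths are placeholders.
{code:java}
import java.nio.file.Paths;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.HardlinkCopyDirectoryWrapper;
import org.apache.lucene.store.IOContext;

public class HardlinkCopyExample {
  public static void main(String[] args) throws Exception {
    try (Directory source = FSDirectory.open(Paths.get("/data/parent/index"));
         Directory target = new HardlinkCopyDirectoryWrapper(
             FSDirectory.open(Paths.get("/data/subshard/index")))) {
      // copyFrom() on the wrapper creates hard links where possible (falling
      // back to a byte copy otherwise), so the "copy" is nearly instantaneous
      // and uses no extra disk space, as long as both directories live on the
      // same filesystem.
      for (String file : source.listAll()) {
        target.copyFrom(source, file, file, IOContext.DEFAULT);
      }
    }
  }
}
{code}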

Here are example timings for the old method (note that the SPLITSHARD command 
returns before the remaining replicas have recovered - the parent shard is 
switched to its sub-shards only when all replicas have recovered, so the total 
time is the sum of the SPLITSHARD time and the time for all replicas to 
recover):
{code:java}
  "timing":{
    "time":1547111.0,
    "checkDiskSpace":{
      "time":14.0},
    "fillRanges":{
      "time":2.0},
    "createSubSlicesAndLeadersInState":{
      "time":4439.0},
    "waitForSubSliceLeadersAlive":{
      "time":1009.0},
    "splitParentCore":{
      "time":1538986.0},
    "applyBufferedUpdates":{
      "time":7.0},
    "identifyNodesForReplicas":{
      "time":1.0},
    "createReplicaPlaceholders":{
      "time":7.0},
    "createCoresForReplicas":{
      "time":2173.0},
    "finalCommit":{
      "time":462.0}},
{code}
After that, the sub-shards recovered in 220753 ms, so the total time was ca. 
1770 sec (1547111 ms for the split + 220753 ms for recovery ≈ 1768 sec).

And here are the timings for the new method, with exactly the same initial data 
layout, hardware, etc.:
{code:java}
  "timing":{
    "time":15633.0,
    "checkDiskSpace":{
      "time":5.0},
    "fillRanges":{
      "time":2.0},
    "createSubSlicesAndLeadersInState":{
      "time":4411.0},
    "waitForSubSliceLeadersAlive":{
      "time":2.0},
    "splitParentCore":{
      "time":9005.0},
    "identifyNodesForReplicas":{
      "time":0.0},
    "createReplicaPlaceholders":{
      "time":2.0},
    "createCoresForReplicas":{
      "time":2105.0},
    "finalCommit":{
      "time":95.0}},
{code}
After that, the sub-shards recovered in 443350 ms, so the total time was ca. 
460 sec (15633 ms for the split + 443350 ms for recovery ≈ 459 sec).



> Improve SplitShardCmd performance and reliability
> -------------------------------------------------
>
>                 Key: SOLR-12509
>                 URL: https://issues.apache.org/jira/browse/SOLR-12509
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public(Default Security Level. Issues are Public) 
>          Components: SolrCloud
>            Reporter: Andrzej Bialecki 
>            Assignee: Andrzej Bialecki 
>            Priority: Major
>         Attachments: SOLR-12509.patch
>
>
> {{SplitShardCmd}} is currently quite complex.
> Shard splitting occurs on active shards, which are still being updated, so 
> the splitting has to involve several carefully orchestrated steps, making 
> sure that new sub-shard placeholders are properly created and visible, and 
> then also applying buffered updates to the split leaders and performing 
> recovery on sub-shard replicas.
> This process could be simplified in cases where collections are not actively 
> being updated or can tolerate a little downtime - we could put the shard 
> "offline", i.e. disable writing while the splitting is in progress (in order 
> to avoid users' confusion we should disable writing to the whole collection).
> The actual index splitting could perhaps be improved to use 
> {{HardlinkCopyDirectoryWrapper}} for creating a copy of the index by 
> hard-linking existing index segments and then applying deletes to the 
> documents that don't belong in a sub-shard (a rough sketch of this idea 
> follows below). However, the resulting index slices that replicas would have 
> to pull would be the same size as the whole shard.
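
To make the hard-link-then-delete idea concrete, here is a rough standalone 
sketch of the delete phase, assuming the sub-shard index was already copied via 
hard links as in the earlier sketch. This is not the actual Solr splitting 
code: the {{MIN_HASH}}/{{MAX_HASH}} bounds, the index path, and the use of the 
{{id}} field as the route key are illustrative assumptions.
{code:java}
import java.nio.file.Paths;
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.BytesRef;
import org.apache.solr.common.util.Hash;

public class PruneSubShardExample {
  // Illustrative hash bounds for one sub-shard; in Solr these would come from
  // the sub-shard's assigned hash range.
  static final int MIN_HASH = Integer.MIN_VALUE;
  static final int MAX_HASH = -1;

  public static void main(String[] args) throws Exception {
    try (FSDirectory dir = FSDirectory.open(Paths.get("/data/subshard/index"));
         IndexWriter writer =
             new IndexWriter(dir, new IndexWriterConfig(new WhitespaceAnalyzer()))) {
      try (DirectoryReader reader = DirectoryReader.open(writer)) {
        for (LeafReaderContext leaf : reader.leaves()) {
          Terms terms = leaf.reader().terms("id"); // assumes "id" is the route key
          if (terms == null) continue;
          TermsEnum te = terms.iterator();
          BytesRef term;
          while ((term = te.next()) != null) {
            String id = term.utf8ToString();
            // Same MurmurHash3 variant that Solr's compositeId router uses.
            int hash = Hash.murmurhash3_x86_32(id, 0, id.length(), 0);
            if (hash < MIN_HASH || hash > MAX_HASH) {
              // deepCopyOf() because the TermsEnum reuses its BytesRef.
              writer.deleteDocuments(new Term("id", BytesRef.deepCopyOf(term)));
            }
          }
        }
      }
      writer.commit();
    }
  }
}
{code}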


