tolbertam commented on code in PR #4558:
URL: https://github.com/apache/cassandra/pull/4558#discussion_r2836644247


##########
doc/modules/cassandra/pages/managing/operating/auto_repair.adoc:
##########
@@ -0,0 +1,460 @@
+= Auto Repair
+:navtitle: Auto Repair
+:description: Auto Repair concepts - How it works, how to configure it, and more.
+:keywords: CEP-37, Repair, Incremental, Preview
+
+Auto Repair is a fully automated scheduler that provides repair orchestration 
within Apache Cassandra. This
+significantly reduces operational overhead by eliminating the need for 
operators to deploy external tools to submit and
+manage repairs.
+
+At a high level, a dedicated thread pool is assigned to the repair scheduler. 
The repair scheduler in Cassandra
+maintains a new replicated table, `system_distributed.auto_repair_history`, 
which stores the repair history for all
+nodes, including details such as the last repair time. The scheduler selects 
the node(s) to begin repairs and
+orchestrates the process to ensure that every table and its token ranges are 
repaired.
+
+The algorithm can run repairs simultaneously on multiple nodes and splits 
token ranges into subranges, with necessary
+retries to handle transient failures. Automatic repair starts as soon as a 
Cassandra cluster is launched, similar to
+compaction, and if configured appropriately, does not require human 
intervention.
+
+The scheduler currently supports Full, Incremental, and Preview repair types with the features listed below. New
+repair types, such as Paxos repair or other future repair mechanisms, can be integrated with minimal development
+effort.
+
+
+== Features
+- Capability to run repairs on multiple nodes simultaneously.
+- A default implementation and an interface to override the dataset being 
repaired per session.
+- Extendable token split algorithms with two implementations readily available:
+.  Splits token ranges by placing a cap on the size of data repaired in one 
session and a maximum cap at the schedule
+level using xref:#repair-token-range-splitter[RepairTokenRangeSplitter] 
(default).
+.  Splits tokens evenly based on the specified number of splits using
+xref:#fixed-split-token-range-splitter[FixedSplitTokenRangeSplitter].
+- A new xref:#table-configuration[CQL table property] (`auto_repair`) offering:
+.  The ability to disable specific repair types at the table level, allowing 
the scheduler to skip one or more tables.
+.  Configuring repair priorities for certain tables to prioritize them over 
others.
+- Dynamic enablement or disablement of the scheduler for each repair type.
+- Configurable settings tailored to each repair job.
+- Rich configuration options for each repair type (e.g., Full, Incremental, or 
Preview repairs).
+- Comprehensive observability features that allow operators to configure 
alarms as needed.
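+
+A hypothetical CQL sketch of the table-level property follows; the option keys shown are illustrative assumptions,
+not the documented names. See xref:#table-configuration[table configuration] for the authoritative options.
+
+[source,cql]
+----
+-- Illustrative sketch only: 'full_enabled' and 'priority' are assumed key names.
+ALTER TABLE my_keyspace.my_table
+  WITH auto_repair = {'full_enabled': 'false', 'priority': '1'};
+----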
+
+== Considerations
+
+Before enabling Auto Repair, please consult the 
xref:managing/operating/repair.adoc[Repair] guide to establish a base
+understanding of repairs.
+
+=== Full Repair
+
+Full repairs operate over all data in the token ranges being repaired, which makes them comparatively expensive. It is
+therefore important to run full repair on a longer schedule and with smaller assignments.
+
+=== Incremental Repair
+
+When enabled from the inception of a cluster, incremental repairs operate only over unrepaired data and should finish
+quickly when run frequently.
+
+Once incremental repair has been run, SSTables are separated into those whose data has been incrementally repaired and
+those whose data has not. It is therefore important to keep running incremental repair once it has been enabled, so
+that newly written data can be compacted together with previously repaired data, allowing overwritten and expired data
+to eventually be purged.
+
+Running incremental repair frequently keeps the unrepaired set small, so each repair operates over less data; a short
+`min_repair_interval` such as `1h` is therefore recommended for new clusters.
+
+[#enabling-ir]
+==== Enabling Incremental Repair on existing clusters with a large amount of data
+Be careful when enabling incremental repair on a cluster for the first time. While
+xref:#repair-token-range-splitter[RepairTokenRangeSplitter] includes a default configuration that attempts to migrate
+to incremental repair gracefully over time, failing to take proper precautions could overwhelm the cluster with
+xref:managing/operating/compaction/overview.adoc#types-of-compaction[anticompactions].
+
+Regardless of how you enable and run incremental repair, it is recommended to run a cycle of full repairs across the
+entire cluster as a pre-flight step. This puts the cluster into a more consistent state, which reduces the amount of
+streaming between replicas when incremental repair initially runs.
+
+If you do not have strong data consistency requirements, you may consider using
+xref:managing/tools/sstable/sstablerepairedset.adoc[sstablerepairedset] to mark all SSTables as repaired before
+enabling incremental repair scheduling with Auto Repair. This reduces the burden of initially running incremental
+repair because all existing data is considered repaired, so subsequent incremental repairs will only run against new
+data.
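+
+As a sketch (paths and keyspace names below are illustrative), SSTables for a table can be marked repaired like so.
+Note that `sstablerepairedset` is an offline tool and must be run with the node stopped:
+
+[source,shell]
+----
+# Stop the node first; mutating repairedAt metadata on a live node is unsafe.
+# -f reads a file listing Data.db components; --really-set is required to actually write the change.
+find /var/lib/cassandra/data/my_keyspace/my_table-*/ -iname "*Data.db" > sstables.txt
+sstablerepairedset --really-set --is-repaired -f sstables.txt
+----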
+
+If you do have strong data consistency requirements, you must treat all data as initially unrepaired and run
+incremental repair against it. Consult
+xref:#incremental-repair-defaults[RepairTokenRangeSplitter's Incremental repair defaults].
+
+In particular, before running incremental repair for the first time, be mindful of the
+xref:managing/operating/compaction/overview.adoc[compaction strategy] used by your tables and how it might impact
+incremental repair:
+
+- *Large SSTables*: When using xref:managing/operating/compaction/stcs.adoc[SizeTieredCompactionStrategy], or any
+  compaction strategy that can create large SSTables containing many partitions, the amount of
+  xref:managing/operating/compaction/overview.adoc#types-of-compaction[anticompaction] required can be excessive.
+  Using a small `bytes_per_assignment` might also contribute to repeated anticompactions over the same unrepaired
+  data.
+- *Partitions overlapping many SSTables*: If partitions overlap between many SSTables, the number of SSTables included
+  in a repair session might be large, and all of them must be anticompacted.
+  xref:managing/operating/compaction/lcs.adoc[LeveledCompactionStrategy] is less susceptible to this issue, as it
+  prevents partitions from overlapping within levels other than L0, but if SSTables start accumulating in L0 between
+  incremental repairs, the cost of anticompaction will increase.
+  xref:managing/operating/compaction/ucs.adoc#sharding[UnifiedCompactionStrategy's sharding] can also be used to avoid
+  partitions overlapping SSTables.
+
+The xref:#repair-token-range-splitter[token_range_splitter] defaults for incremental repair conservatively migrate
+roughly 100GiB of compressed data per node per day. Depending on your requirements, data set, and the capability of
+the cluster's hardware, you may tune these values to be more aggressive or more conservative.
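+
+As an illustrative sketch of what such tuning might look like, the YAML nesting and key names below (apart from
+`bytes_per_assignment`, which is discussed above) are assumptions; consult the configuration reference below for the
+authoritative structure:
+
+[source,yaml]
+----
+# Hypothetical sketch: nesting and key names are illustrative, not authoritative.
+auto_repair:
+  repair_type_overrides:
+    incremental:
+      token_range_splitter:
+        parameters:
+          - bytes_per_assignment: 50GiB    # cap on data repaired in one session
+            max_bytes_per_schedule: 100GiB # cap on data migrated per node per schedule
+----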
+
+=== Previewing Repaired Data
+
+The `preview_repaired` repair type executes repairs over the repaired data set 
to detect possible data inconsistencies.
+
+Inconsistencies in the repaired data set are not expected in practice; if detected, they could indicate a bug in
+incremental repair.
+
+Running preview repairs is useful when considering the
+xref:cassandra:managing/operating/compaction/tombstones.adoc#deletion[only_purge_repaired_tombstones] table compaction
+option, which prevents data from being resurrected when inconsistent replicas are missing tombstones.
+
+When enabled, the `BytesPreviewedDesynchronized` and 
`TokenRangesPreviewedDesynchronized`
+xref:cassandra:managing/operating/metrics.adoc#table-metrics[table metrics] 
can be used to detect inconsistencies in the
+repaired data set.
+
+== Configuring Auto Repair in cassandra.yaml
+
+Configuration for Auto Repair is managed in the `cassandra.yaml` file by the 
`auto_repair` property.
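+
+As an illustrative sketch of the shape of this configuration (other than `auto_repair` and `min_repair_interval`, the
+key names below are assumptions; the sections that follow describe the authoritative options):
+
+[source,yaml]
+----
+# Hypothetical sketch: option names and nesting are illustrative, not authoritative.
+auto_repair:
+  enabled: true
+  repair_type_overrides:
+    incremental:
+      enabled: true
+      min_repair_interval: 1h
+    full:
+      enabled: true
+      min_repair_interval: 7d
+----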

Review Comment:
   We should also add a paragraph mentioning something like:
   
   Auto Repair was originally introduced in Cassandra 6.0, but backported to Cassandra 5.0.7. To enable the feature in Cassandra 5.0, one must first ensure every node in the cluster is upgraded to 5.0.7, and the system property `-Dcassandra.autorepair.enable=true` must be added to your JVM arguments.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

