bdeggleston commented on code in PR #4106:
URL: https://github.com/apache/cassandra/pull/4106#discussion_r2070876300
##########
src/java/org/apache/cassandra/replication/Shard.java:
##########
@@ -94,6 +109,42 @@ void addSummaryForRange(AbstractBounds<PartitionPosition> range, boolean include
});
}
+    List<InetAddressAndPort> remoteReplicas()
+    {
+        List<InetAddressAndPort> replicas = new ArrayList<>(participants.size() - 1);
+        for (int i = 0, size = participants.size(); i < size; ++i)
+        {
+            int hostId = participants.get(i);
+            if (hostId != localHostId)
+                replicas.add(ClusterMetadata.current().directory.endpoint(new NodeId(hostId)));
+        }
+        return replicas;
+    }
+
+    /**
+     * Collects replicated offsets for the logs owned by this coordinator on this shard.
+     */
+    ShardReplicatedOffsets collectReplicatedOffsets()
+    {
+        Long2ObjectHashMap<LogReplicatedOffsets> offsets = new Long2ObjectHashMap<>();
+        for (CoordinatorLogPrimary log : primaryLogs())
Review Comment:
We discussed 1:1 but I’m summarizing here for posterity:
I agree with everything you said here about the ideal solution. What I'm not
sure about is whether the benefit of the higher frequency will be worth the
additional development effort. I think it's better to start simple and add
optimizations / complexity incrementally as perf testing justifies it and/or
time allows.
Also, this patch currently only results in fully reconciled ids if the
original coordinator observes delivery of the initial mutation. Replicas won’t
learn of mutations witnessed as part of read reconciliation unless the receiver
is also the original coordinator.
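
To make the limitation above concrete, here is a minimal toy model (all names are hypothetical, not from the patch): a mutation id only becomes fully reconciled on the node that witnesses its delivery if that node is also the mutation's original coordinator, so a replica that witnesses the mutation via read reconciliation does not mark it reconciled.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the behavior described in the comment above; the
// class and method names are illustrative, not taken from the patch.
class ReconciliationSketch
{
    // mutation id -> host id of the coordinator that originated it
    static final Map<Long, Integer> originCoordinator = new HashMap<>();
    // per node: mutation ids the node has marked fully reconciled
    static final Map<Integer, Set<Long>> reconciled = new HashMap<>();

    // A node witnesses delivery of a mutation, either via a write ack or as
    // part of read reconciliation. Under the current patch, the id is only
    // marked fully reconciled when the witnessing node is the original
    // coordinator of that mutation.
    static void witnessDelivery(int nodeId, long mutationId)
    {
        Integer origin = originCoordinator.get(mutationId);
        if (origin != null && origin == nodeId)
            reconciled.computeIfAbsent(nodeId, k -> new HashSet<>()).add(mutationId);
    }

    public static void main(String[] args)
    {
        originCoordinator.put(42L, 1); // node 1 coordinated mutation 42
        witnessDelivery(2, 42L);       // node 2 sees it during read reconciliation: not reconciled
        witnessDelivery(1, 42L);       // node 1, the original coordinator, observes delivery
        System.out.println("node2=" + reconciled.getOrDefault(2, Set.of()).contains(42L)
                           + " node1=" + reconciled.getOrDefault(1, Set.of()).contains(42L));
    }
}
```

Running the sketch prints `node2=false node1=true`: only the original coordinator ends up with the fully reconciled id, which is the gap the comment points out.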
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]