[ https://issues.apache.org/jira/browse/CASSANDRA-20059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17896308#comment-17896308 ]

Sam Tunnicliffe commented on CASSANDRA-20059:
---------------------------------------------

{{retryIndefinitely}} is used exactly once in trunk, in {{Processor::commit}} (not specifically {{RemoteProcessor}}). It's used only under very specific circumstances, for the reason noted in the comment:
{code:java}
        // When the cluster is bounced, it may happen that regular nodes come up earlier than CMS nodes, or CMS
        // nodes come up and fail to finish the startup since other CMS nodes are not up yet, and therefore can not
        // submit the STARTUP message. This allows the bounces affecting majority of CMS nodes to finish successfully.
        if (transform.kind() == Transformation.Kind.STARTUP)
        {
            return commit(entryId, transform, lastKnown,
                          Retry.Deadline.retryIndefinitely(DatabaseDescriptor.getCmsAwaitTimeout().to(TimeUnit.NANOSECONDS),
                                                           TCMMetrics.instance.commitRetries));
        }
{code}

{quote}The issue here is that the networking retry has no clue that we gave up waiting on the request
{quote}
I may be missing something, and you didn't show where the Accord example gets its {{retryPolicy}} from, but if the calling code is going to give up waiting on the request, why use {{retryIndefinitely}} instead of an actual deadline from {{at}} or {{after}} in {{Retry.Deadline}}?
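
For illustration only, a rough sketch of that alternative (not code from trunk; the {{Retry.Deadline.after}} signature and the {{Jitter}} arguments are assumed from the snippets quoted below, and may differ):
{code:java}
// Sketch: bound the retries with a real deadline so the callback stops retrying
// once the caller's wait would have expired. Assumes Deadline.after(timeoutNanos, delegate);
// the Jitter arguments mirror the retryIndefinitely implementation quoted in the description.
return commit(entryId, transform, lastKnown,
              Retry.Deadline.after(DatabaseDescriptor.getCmsAwaitTimeout().to(TimeUnit.NANOSECONDS),
                                   new Retry.Jitter(Integer.MAX_VALUE, DEFAULT_BACKOFF_MS, new Random(),
                                                    TCMMetrics.instance.commitRetries)));
{code}
That way {{reachedMax()}} and {{remainingNanos()}} would both reflect the same point in time at which the caller stops waiting.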

> TCM's Retry.Deadline#retryIndefinitely is dangerous if used with 
> RemoteProcessor as the deadline does not impact message retries
> --------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-20059
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-20059
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Transactional Cluster Metadata
>            Reporter: David Capwell
>            Priority: Normal
>             Fix For: 5.x
>
>
> {code}
> public static Deadline retryIndefinitely(long timeoutNanos, Meter retryMeter)
> {
>     return new Deadline(Clock.Global.nanoTime() + timeoutNanos,
>                         new Retry.Jitter(Integer.MAX_VALUE, DEFAULT_BACKOFF_MS, new Random(), retryMeter))
>     {
>         @Override
>         public boolean reachedMax()
>         {
>             return false;
>         }
>         @Override
>         public long remainingNanos()
>         {
>             return timeoutNanos;
>         }
>         public String toString()
>         {
>             return String.format("RetryIndefinitely{tries=%d}", currentTries());
>         }
>     };
> }
> {code}
> Sample usage pattern (the example is from Accord, but the same pattern exists in RemoteProcessor.commit):
> {code}
> Promise<LogState> request = new AsyncPromise<>();
> List<InetAddressAndPort> candidates = new ArrayList<>(log.metadata().fullCMSMembers());
> sendWithCallbackAsync(request,
>                       Verb.TCM_RECONSTRUCT_EPOCH_REQ,
>                       new ReconstructLogState(lowEpoch, highEpoch, includeSnapshot),
>                       new CandidateIterator(candidates),
>                       retryPolicy);
> return request.get(retryPolicy.remainingNanos(), TimeUnit.NANOSECONDS);
> {code}
> The issue here is that the networking retry has no clue that we gave up waiting on the request, so we will keep retrying until success! The reason for this is that “reachedMax” is used to see if it’s safe to run again, but it isn’t, as the deadline has passed!


