[ 
https://issues.apache.org/jira/browse/IGNITE-27524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-27524:
-----------------------------------------
    Component/s: placement driver ai3

> Improve logging and timeout management of waitForActualState
> ------------------------------------------------------------
>
>                 Key: IGNITE-27524
>                 URL: https://issues.apache.org/jira/browse/IGNITE-27524
>             Project: Ignite
>          Issue Type: Improvement
>          Components: placement driver ai3
>            Reporter: Anton Laletin
>            Assignee: Anton Laletin
>            Priority: Major
>              Labels: ignite-3
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> In PlacementDriverMessageProcessor there is waitForActualState, which reads 
> the index from raft and then waits until the storage index reaches the raft one.
> The proposal is to improve logging to differentiate the timeout scenarios, 
> i.e. in case of a failure it should be clear what the cause was: raft index 
> retrieval, storage index tracking, etc.
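> For illustration, a minimal sketch (hypothetical LOG calls and messages, not 
> the actual implementation) of how each stage could report its own failure 
> cause:
> {code:java}
> retryOperationUntilSuccess(raftClient::readIndex, e -> currentTimeMillis() > expirationTime, executor)
>         .whenComplete((index, ex) -> {
>             // Hypothetical logging: makes it explicit that the raft index read failed or timed out.
>             if (ex != null) {
>                 LOG.warn("waitForActualState: raft index retrieval failed", ex);
>             }
>         })
>         .thenCompose(index -> storageIndexTracker.waitFor(index)
>                 .whenComplete((v, ex) -> {
>                     // Hypothetical logging: distinguishes a storage index tracking failure from the raft read above.
>                     if (ex != null) {
>                         LOG.warn("waitForActualState: storage index tracking failed, raft index=" + index, ex);
>                     }
>                 }));
> {code}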
> Moreover, the following code applies two similar timeouts to the raft index 
> read, while storage index tracking has no timeout at all.
> {code:java}
> retryOperationUntilSuccess(raftClient::readIndex, e -> currentTimeMillis() > expirationTime, executor)
>         .orTimeout(timeout, TimeUnit.MILLISECONDS)
>         .thenCompose(storageIndexTracker::waitFor);
> {code}
> I propose to add a timeout for storage index tracking and to remove the 
> extra one for the raft index read:
> {code:java}
> retryOperationUntilSuccess(raftClient::readIndex, e -> currentTimeMillis() > expirationTime, executor)
>         .thenCompose(storageIndexTracker::waitFor)
>         .orTimeout(timeout, TimeUnit.MILLISECONDS);
> {code}
> Plus, it does not seem to make sense to wait longer than the lease lifespan, 
> so let's calculate the remaining time after the raft index is retrieved and 
> use that remaining time as the timeout.
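> A minimal sketch of that combination (hypothetical remaining-time 
> calculation, assuming expirationTime is the lease expiration timestamp in 
> milliseconds, as in the retry condition above):
> {code:java}
> retryOperationUntilSuccess(raftClient::readIndex, e -> currentTimeMillis() > expirationTime, executor)
>         .thenCompose(index -> {
>             // Hypothetical: do not wait for the storage index longer than what is left of the lease lifespan.
>             long remaining = Math.max(expirationTime - currentTimeMillis(), 0);
>             return storageIndexTracker.waitFor(index)
>                     .orTimeout(remaining, TimeUnit.MILLISECONDS);
>         });
> {code}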



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
