[ https://issues.apache.org/jira/browse/HDFS-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17686119#comment-17686119 ]
ASF GitHub Bot commented on HDFS-16890:
---------------------------------------

omalley commented on code in PR #5298:
URL: https://github.com/apache/hadoop/pull/5298#discussion_r1100744979


##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:
##########
@@ -1730,4 +1755,30 @@ private static boolean isReadCall(Method method) {
     }
     return !method.getAnnotationsByType(ReadOnly.class)[0].activeOnly();
   }
+
+  /**
+   * Checks and sets last refresh time for a namespace's stateId.
+   * Returns true if refresh time is newer than threshold.
+   * Otherwise, return false and call should be handled by active namenode.
+   * @param nsId namespaceID
+   */
+  @VisibleForTesting
+  boolean isNamespaceStateIdFresh(String nsId) {
+    if (activeNNStateIdRefreshPeriodMs < 0) {
+      return true;
+    }
+
+    long currentTimeMs = Time.monotonicNow();
+    LongAccumulator latestRefreshTimeMs = lastActiveNNRefreshTimes
+        .computeIfAbsent(nsId, key -> new LongAccumulator(Math::max, 0));
+
+    return ((currentTimeMs - latestRefreshTimeMs.get()) <= activeNNStateIdRefreshPeriodMs);
+  }
+
+  private void refreshTimeOfLastCallToActiveNameNode(String namespaceId) {
+    LongAccumulator latestRefreshTimeMs = lastActiveNNRefreshTimes

Review Comment:
   I'd suggest putting this common code into a method like:

     LongAccumulator getLastCallToActive(String nsId) { ... }

   Then the other lines could mostly be inlined:

     return currentTimeMs - getLastCallToActive(nsId).get() <= activeNNStateIdRefreshPeriodMs;

   and

     getLastCallToActive(nsId).accumulate(Time.monotonicNow());

##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:
##########
@@ -1702,10 +1724,12 @@ private List<? extends FederationNamenodeContext> getOrderedNamenodes(String nsI
       boolean isObserverRead) throws IOException {
     final List<? extends FederationNamenodeContext> namenodes;
-    if (RouterStateIdContext.getClientStateIdFromCurrentCall(nsId) > Long.MIN_VALUE) {
+    if (isNamespaceStateIdFresh(nsId)
+        && (RouterStateIdContext.getClientStateIdFromCurrentCall(nsId) > Long.MIN_VALUE)) {
       namenodes = namenodeResolver.getNamenodesForNameserviceId(nsId, isObserverRead);
     } else {
       namenodes = namenodeResolver.getNamenodesForNameserviceId(nsId, false);
+      refreshTimeOfLastCallToActiveNameNode(nsId);

Review Comment:
   Shouldn't this be updated based on whether we went to the active? In other words, if isObserverRead is false, this should be updated as well.
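For clarity, here is a minimal, self-contained sketch of the refactoring the first review comment proposes (a shared getLastCallToActive accessor with the two call sites inlined). In the actual patch these members live in RouterRpcClient and the clock is org.apache.hadoop.util.Time.monotonicNow(); the class name, constructor, map type, and the monotonicNowMs helper below are illustrative assumptions, not part of the PR. Per the second review comment, refreshTimeOfLastCallToActiveNameNode would be invoked whenever the call actually went to the active (including when isObserverRead is false), not only on the stale-state fallback path.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAccumulator;

/** Standalone sketch of the suggested helper; names outside the review are hypothetical. */
class StateIdFreshnessSketch {
  private final ConcurrentHashMap<String, LongAccumulator> lastActiveNNRefreshTimes =
      new ConcurrentHashMap<>();
  private final long activeNNStateIdRefreshPeriodMs;

  StateIdFreshnessSketch(long activeNNStateIdRefreshPeriodMs) {
    this.activeNNStateIdRefreshPeriodMs = activeNNStateIdRefreshPeriodMs;
  }

  /** Shared accessor for the per-namespace accumulator, as the review suggests. */
  private LongAccumulator getLastCallToActive(String nsId) {
    return lastActiveNNRefreshTimes
        .computeIfAbsent(nsId, key -> new LongAccumulator(Math::max, 0));
  }

  /** True if the last recorded call to the active NN is within the refresh period. */
  boolean isNamespaceStateIdFresh(String nsId) {
    if (activeNNStateIdRefreshPeriodMs < 0) {
      return true; // periodic refresh disabled
    }
    return monotonicNowMs() - getLastCallToActive(nsId).get() <= activeNNStateIdRefreshPeriodMs;
  }

  /** Record that a call was just routed to the active NN for this namespace. */
  void refreshTimeOfLastCallToActiveNameNode(String nsId) {
    getLastCallToActive(nsId).accumulate(monotonicNowMs());
  }

  /** Stand-in for org.apache.hadoop.util.Time.monotonicNow(), assumed for this sketch. */
  private static long monotonicNowMs() {
    return System.nanoTime() / 1_000_000;
  }
}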
> RBF: Add period state refresh to keep router state near active namenode's
> --------------------------------------------------------------------------
>
>                 Key: HDFS-16890
>                 URL: https://issues.apache.org/jira/browse/HDFS-16890
>             Project: Hadoop HDFS
>          Issue Type: Task
>            Reporter: Simbarashe Dzinamarira
>            Assignee: Simbarashe Dzinamarira
>            Priority: Major
>              Labels: pull-request-available
>
> When using the ObserverReadProxyProvider, clients can set
> *dfs.client.failover.observer.auto-msync-period...* to periodically get the
> Active namenode's state. When using routers without the
> ObserverReadProxyProvider, this periodic update is lost.
> In a busy cluster, the Router is constantly updated with the active
> namenode's state when
> # There is a write operation.
> # There is an operation (read or write) from a new client.
> However, in the scenario where there are no new clients and no write
> operations, the state kept in the router can lag behind the active's. The
> router does update its state with responses from the Observer, but the
> observer may be lagging behind too.
> We should have a periodic refresh in the router to serve a similar role as
> *dfs.client.failover.observer.auto-msync-period*.
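For reference, a minimal sketch of the client-side auto-msync behaviour that this issue wants the router to emulate. The nameservice name "mycluster", the 30-second period, and the per-nameservice key suffix are assumptions for illustration only and are not taken from the issue; verify the exact key form against your Hadoop version.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class ObserverAutoMsyncExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Route reads through the observer-aware proxy provider for the
    // hypothetical nameservice "mycluster".
    conf.set("dfs.client.failover.proxy.provider.mycluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider");
    // Assumed key form: msync against the active at most every 30 seconds.
    conf.set("dfs.client.failover.observer.auto-msync-period.mycluster", "30s");
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf)) {
      fs.getFileStatus(new Path("/")); // ordinary read; msync happens transparently
    }
  }
}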