mkuchenbecker commented on code in PR #5142:
URL: https://github.com/apache/hadoop/pull/5142#discussion_r1028372645
##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestObserverWithRouter.java:
##
@@ -439,4 +440,60 @@ public void testRouterMsync() throws Exception {
     assertEquals("Four calls should be sent to active", 4,
         rpcCountForActive);
   }
+
+  @Test
+  public void testSingleRead() throws Exception {
+    List namenodes = routerContext
+        .getRouter().getNamenodeResolver()
+        .getNamenodesForNameserviceId(cluster.getNameservices().get(0), true);
+    assertEquals("First namenode should be observer",
+        namenodes.get(0).getState(),
+        FederationNamenodeServiceState.OBSERVER);
+    Path path = new Path("/");
+
+    long rpcCountForActive;
+    long rpcCountForObserver;
+
+    // Send read request
+    fileSystem.listFiles(path, false);
+    fileSystem.close();
+
+    rpcCountForActive = routerContext.getRouter().getRpcServer()
+        .getRPCMetrics().getActiveProxyOps();
+    // getListing call sent to active.
+    assertEquals("Only one call should be sent to active", 1,
+        rpcCountForActive);
+
+    rpcCountForObserver = routerContext.getRouter().getRpcServer()
+        .getRPCMetrics().getObserverProxyOps();
+    // No getListing call should be sent to observer.
+    assertEquals("No calls should be sent to observer", 0,
+        rpcCountForObserver);
+  }
+
+  @Test
+  public void testSingleReadUsingObserverReadProxyProvider() throws Exception {
+    fileSystem.close();
+    fileSystem = routerContext.getFileSystemWithObserverReadProxyProvider();
Review Comment:
This seems wrong to special-case in this way. Either manage it during setup
or set it for every function, but I'd advise against mixing the two.
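A per-test setup along these lines would avoid the mid-test close-and-swap (a sketch only: `getFileSystemWithObserverReadProxyProvider` and `getFileSystemWithObserverReadsEnabled` come from the diff, everything else is a stand-in for the real test fixture):

```java
// Sketch of "manage it during setup": the fixture owns FileSystem creation,
// and each test declares which flavor it needs instead of special-casing.
public class PerTestSetupSketch {
    enum FsFlavor { OBSERVER_READ_ENABLED, OBSERVER_READ_PROXY_PROVIDER }

    private String fileSystem; // stand-in for the shared FileSystem field

    // Every test routes through this, so no test swaps the instance mid-test.
    void setUp(FsFlavor flavor) {
        // Real code would call routerContext.getFileSystemWithObserverReadsEnabled()
        // or routerContext.getFileSystemWithObserverReadProxyProvider() here.
        fileSystem = (flavor == FsFlavor.OBSERVER_READ_PROXY_PROVIDER)
            ? "fs:proxy-provider" : "fs:observer-read";
    }

    String testSingleRead() {
        setUp(FsFlavor.OBSERVER_READ_ENABLED);
        return fileSystem;
    }

    String testSingleReadUsingObserverReadProxyProvider() {
        setUp(FsFlavor.OBSERVER_READ_PROXY_PROVIDER);
        return fileSystem;
    }

    public static void main(String[] args) {
        PerTestSetupSketch t = new PerTestSetupSketch();
        System.out.println(t.testSingleRead());
        System.out.println(t.testSingleReadUsingObserverReadProxyProvider());
    }
}
```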
##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestObserverWithRouter.java:
##
@@ -122,7 +123,9 @@ public void startUpCluster(int numberOfObserver, Configuration confOverrides) th
     cluster.waitActiveNamespaces();
     routerContext = cluster.getRandomRouter();
-    fileSystem = routerContext.getFileSystemWithObserverReadsEnabled();
+    Configuration confToEnableObserverRead = new Configuration();
+    confToEnableObserverRead.setBoolean(HdfsClientConfigKeys.DFS_RBF_OBSERVER_READ_ENABLE,
+        true);
+    fileSystem = routerContext.getFileSystem(confToEnableObserverRead);
Review Comment:
We are losing coverage on `getFileSystemWithObserverReadsEnabled` with this
change; we should likely be testing both as they are both valid use-cases
whether you want to msync or not.
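One way to keep both paths covered is to run the same read scenario against each factory. This is only a sketch: the two factory names reflect the diff, but the harness below is a stand-in for the real `TestObserverWithRouter` fixture:

```java
// Sketch: exercise both FileSystem creation paths so neither loses coverage.
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

public class BothFactoriesSketch {
    // Stand-ins for routerContext.getFileSystemWithObserverReadsEnabled()
    // and routerContext.getFileSystem(confToEnableObserverRead).
    static String fsWithObserverReadsEnabled() { return "msync-enabled"; }
    static String fsWithObserverReadConf()     { return "conf-enabled"; }

    static List<Supplier<String>> factories() {
        return Arrays.asList(
            BothFactoriesSketch::fsWithObserverReadsEnabled,
            BothFactoriesSketch::fsWithObserverReadConf);
    }

    public static void main(String[] args) {
        // The real test would issue a read and assert on
        // ActiveProxyOps / ObserverProxyOps for each variant.
        for (Supplier<String> factory : factories()) {
            String fs = factory.get();
            System.out.println("tested variant: " + fs);
        }
    }
}
```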
##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java:
##
@@ -349,6 +349,13 @@ public static ClientProtocol createProxyWithAlignmentContext(
     boolean withRetries, AtomicBoolean fallbackToSimpleAuth,
     AlignmentContext alignmentContext)
     throws IOException {
+    if (conf.getBoolean(HdfsClientConfigKeys.DFS_RBF_OBSERVER_READ_ENABLE,
Review Comment:
What was the original behaviour when someone passed `null` to this
function? With this change, an unguarded `conf.getBoolean(...)` would NPE.
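If the old contract allowed a `null` conf, a guard like this would preserve it (a sketch only: the `Conf` class and key string are stand-ins, not the real `Configuration` or the actual value of `DFS_RBF_OBSERVER_READ_ENABLE`):

```java
// Sketch of null-safe config lookup for the case the reviewer asks about.
import java.util.HashMap;
import java.util.Map;

public class NullConfSketch {
    // Minimal stand-in for org.apache.hadoop.conf.Configuration.
    static class Conf {
        private final Map<String, String> props = new HashMap<>();
        void setBoolean(String key, boolean v) { props.put(key, Boolean.toString(v)); }
        boolean getBoolean(String key, boolean defaultVal) {
            String v = props.get(key);
            return v == null ? defaultVal : Boolean.parseBoolean(v);
        }
    }

    // Key name is illustrative only; the real constant lives in HdfsClientConfigKeys.
    static final String OBSERVER_READ_ENABLE = "dfs.client.rbf.observer.read.enable";

    static boolean observerReadEnabled(Conf conf) {
        // A null conf previously fell through untouched; keep that behaviour
        // instead of letting conf.getBoolean(...) throw NPE.
        return conf != null && conf.getBoolean(OBSERVER_READ_ENABLE, false);
    }

    public static void main(String[] args) {
        Conf conf = new Conf();
        conf.setBoolean(OBSERVER_READ_ENABLE, true);
        System.out.println(observerReadEnabled(conf)); // true
        System.out.println(observerReadEnabled(null)); // false, no NPE
    }
}
```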
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org