goiri commented on code in PR #4543:
URL: https://github.com/apache/hadoop/pull/4543#discussion_r922880359


##########
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java:
##########
@@ -1366,4 +1393,42 @@ public void shutdown() {
       threadpool.shutdown();
     }
   }
+
+  private <R> Map<SubClusterInfo, R> invokeConcurrent(Collection<SubClusterInfo> clusterIds,
+      ClientMethod request, Class<R> clazz) {
+
+    Map<SubClusterInfo, R> results = new HashMap<>();
+
+    // Send the requests in parallel
+    CompletionService<R> compSvc = new ExecutorCompletionService<>(this.threadpool);
+
+    for (final SubClusterInfo info : clusterIds) {
+      compSvc.submit(() -> {
+        DefaultRequestInterceptorREST interceptor = getOrCreateInterceptorForSubCluster(
+            info.getSubClusterId(), info.getRMWebServiceAddress());
+        try {
+          Method method = DefaultRequestInterceptorREST.class.
+              getMethod(request.getMethodName(), request.getTypes());
+          return (R) clazz.cast(method.invoke(interceptor, request.getParams()));

Review Comment:
   I was referring to something like:
   ```
   Object retObj = method.invoke(interceptor, request.getParams());
   R ret = clazz.cast(retObj);
   return ret;
   ```
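
   A standalone sketch of the suggested split (hypothetical class and method names stand in for `DefaultRequestInterceptorREST` and the real request): `clazz.cast` already returns `R`, so the extra `(R)` cast is redundant and only adds an unchecked-cast warning.

   ```java
   import java.lang.reflect.Method;

   public class CastSketch {
     // Hypothetical stand-in for DefaultRequestInterceptorREST.
     public static class Interceptor {
       public String getInfo() {
         return "ok";
       }
     }

     public static <R> R invokeAndCast(Object target, String methodName, Class<R> clazz)
         throws Exception {
       Method method = target.getClass().getMethod(methodName);
       // clazz.cast returns R already; splitting the invoke and the cast
       // keeps each step readable and needs no (R) cast.
       Object retObj = method.invoke(target);
       R ret = clazz.cast(retObj);
       return ret;
     }

     public static void main(String[] args) throws Exception {
       String info = invokeAndCast(new Interceptor(), "getInfo", String.class);
       System.out.println(info); // prints "ok"
     }
   }
   ```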



##########
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/MockDefaultRequestInterceptorREST.java:
##########
@@ -231,4 +240,42 @@ public boolean isRunning() {
   public void setRunning(boolean runningMode) {
     this.isRunning = runningMode;
   }
+
+  @Override
+  public ContainersInfo getContainers(HttpServletRequest req, HttpServletResponse res,
+      String appId, String appAttemptId) {
+    if (!isRunning) {
+      throw new RuntimeException("RM is stopped");
+    }
+
+    // We avoid checking whether the Application exists in the system because
+    // we need to validate that each subCluster returns 1 container.
+    ContainersInfo containers = new ContainersInfo();
+
+    int subClusterId = Integer.valueOf(getSubClusterId().getId());
+
+    ContainerId containerId = ContainerId.newContainerId(
+        ApplicationAttemptId.fromString(appAttemptId), subClusterId);
+    Resource allocatedResource =
+        Resource.newInstance(subClusterId, subClusterId);
+
+    NodeId assignedNode = NodeId.newInstance("Node", subClusterId);
+    Priority priority = Priority.newInstance(subClusterId);
+    long creationTime = subClusterId;
+    long finishTime = subClusterId;
+    String diagnosticInfo = "Diagnostic " + subClusterId;
+    String logUrl = "Log " + subClusterId;
+    int containerExitStatus = subClusterId;
+    ContainerState containerState = ContainerState.COMPLETE;
+    String nodeHttpAddress = "HttpAddress " + subClusterId;
+
+    ContainerReport containerReport = ContainerReport.newInstance(containerId, allocatedResource,
+        assignedNode, priority, creationTime, finishTime, diagnosticInfo, logUrl,
+            containerExitStatus, containerState, nodeHttpAddress);

Review Comment:
   Bad indentation
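   The continuation lines could, for example, be aligned like this (a sketch assuming a uniform 4-space continuation indent; only the last line's extra indent changes):
   ```
   ContainerReport containerReport = ContainerReport.newInstance(containerId, allocatedResource,
       assignedNode, priority, creationTime, finishTime, diagnosticInfo, logUrl,
       containerExitStatus, containerState, nodeHttpAddress);
   ```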



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

