virajjasani commented on code in PR #4264:
URL: https://github.com/apache/hadoop/pull/4264#discussion_r866455501


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:
##########
@@ -1915,12 +1919,57 @@ public int reconfig(String[] argv, int i) throws IOException {
   }
 
   int startReconfiguration(final String nodeThpe, final String address)
-      throws IOException {
-    return startReconfiguration(nodeThpe, address, System.out, System.err);
+      throws IOException, InterruptedException {
+    return startReconfigurationUtil(nodeThpe, address, System.out, System.err);
+  }
+
+  int startReconfigurationUtil(final String nodeType, final String address, final PrintStream out,
+      final PrintStream err) throws IOException, InterruptedException {
+    if (!"livenodes".equals(address)) {
+      return startReconfiguration(nodeType, address, out, err);
+    }
+    if (!"datanode".equals(nodeType)) {
+      err.println("Only datanode type supports reconfiguration in bulk.");
+      return 1;
+    }
+    ExecutorService executorService = Executors.newFixedThreadPool(5);

Review Comment:
   I thought of doing the same initially, but then I realized that allowing users to provide the thread pool size might be a bit complicated for the admin/user (i.e. a bit more of a headache for the user), since they would have to learn where this size is used internally. Hence I kept it at 5, which seemed reasonable even for bigger clusters. Does this sound good?
   If you have a strong preference, I will be happy to do it :)
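   For illustration only, not part of the PR: a minimal sketch of what the configurable alternative could look like, assuming a hypothetical optional trailing argument for parallelism on the bulk `-reconfig` path; the argument position, helper name, and class are assumptions, and the fallback keeps the fixed default of 5 shown in the diff above.

   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.TimeUnit;

   // Hypothetical sketch, not the PR's code: derive the bulk-reconfiguration
   // parallelism from an optional numeric argument, defaulting to 5.
   class ReconfigPoolSizeSketch {

     // Returns the parsed pool size, or defaultSize when no valid number is given.
     static int parsePoolSize(String[] argv, int idx, int defaultSize) {
       if (idx < argv.length) {
         try {
           return Math.max(1, Integer.parseInt(argv[idx]));
         } catch (NumberFormatException e) {
           // Not a number: fall back to the default below.
         }
       }
       return defaultSize;
     }

     public static void main(String[] args) throws InterruptedException {
       int poolSize = parsePoolSize(args, 0, 5);
       ExecutorService executorService = Executors.newFixedThreadPool(poolSize);
       // ... submit one startReconfiguration task per live datanode here ...
       executorService.shutdown();
       executorService.awaitTermination(1, TimeUnit.MINUTES);
     }
   }
   ```

   The tradeoff is the one described above: an extra knob gives admins of very large clusters more control, at the cost of one more argument to document and validate.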



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

