aasha commented on a change in pull request #1648:
URL: https://github.com/apache/hive/pull/1648#discussion_r520328435



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/parse/repl/CopyUtils.java
##########
@@ -82,27 +87,31 @@ public void copyAndVerify(Path destRoot, List<ReplChangeManager.FileInfo> srcFil
     }
     FileSystem sourceFs = srcFiles.get(0).getSrcFs();
     boolean useRegularCopy = regularCopy(sourceFs, srcFiles);
+    ExecutorService executorService = null;
     try {
       if (useRegularCopy || readSrcAsFilesList) {
+        executorService = Executors.newFixedThreadPool(maxParallelCopyTask);

Review comment:
       DistCp supports copying blocks of the same file in parallel as part of a distributed copy. Customers can use this option:
   
   | Flag | Description | Notes |
   | -- | -- | -- |
   | `-blocksperchunk <blocksperchunk>` | Number of blocks per chunk. When specified, split files into chunks to copy in parallel | If set to a positive value, files with more blocks than this value will be split into chunks of `<blocksperchunk>` blocks to be transferred in parallel, and reassembled on the destination. By default, `<blocksperchunk>` is 0 and the files will be transmitted in their entirety without splitting. This switch is only applicable when the source file system implements the getBlockLocations method and the target file system implements the concat method. |
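
   For the in-process path that the diff takes (`Executors.newFixedThreadPool(maxParallelCopyTask)` inside a `try`), the pattern being reviewed is roughly the following. This is a minimal, self-contained sketch only: it uses `java.nio.file` instead of Hadoop's `FileSystem`, and the class and constant names (`ParallelCopySketch`, `MAX_PARALLEL_COPY_TASKS`) are hypothetical stand-ins, not code from the PR.

   ```java
   import java.nio.file.Files;
   import java.nio.file.Path;
   import java.nio.file.StandardCopyOption;
   import java.util.ArrayList;
   import java.util.List;
   import java.util.concurrent.ExecutionException;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.Future;

   public class ParallelCopySketch {
       // Hypothetical parallelism knob, standing in for maxParallelCopyTask in the diff.
       static final int MAX_PARALLEL_COPY_TASKS = 4;

       // Copy each source file into destDir on a fixed-size thread pool,
       // wait for every copy to finish, and shut the pool down in a
       // finally block so it is released even when a copy task fails.
       public static void copyAll(List<Path> srcFiles, Path destDir)
               throws InterruptedException, ExecutionException {
           ExecutorService executorService = null;
           try {
               executorService = Executors.newFixedThreadPool(MAX_PARALLEL_COPY_TASKS);
               List<Future<Path>> futures = new ArrayList<>();
               for (Path src : srcFiles) {
                   // submit(Callable) lets the lambda throw IOException;
                   // Files.copy returns the destination path on success.
                   futures.add(executorService.submit(() ->
                           Files.copy(src, destDir.resolve(src.getFileName()),
                                   StandardCopyOption.REPLACE_EXISTING)));
               }
               for (Future<Path> f : futures) {
                   f.get(); // propagate any copy failure as ExecutionException
               }
           } finally {
               if (executorService != null) {
                   executorService.shutdown();
               }
           }
       }
   }
   ```

   DistCp's `-blocksperchunk` differs from this in that the parallelism is across blocks of a single file and runs as a distributed job, whereas the thread pool above only parallelizes across whole files within one process.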
   
   
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
