[ 
https://issues.apache.org/jira/browse/HADOOP-17531?focusedWorklogId=559854&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-559854
 ]

ASF GitHub Bot logged work on HADOOP-17531:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 02/Mar/21 08:41
            Start Date: 02/Mar/21 08:41
    Worklog Time Spent: 10m 
      Work Description: ayushtkn commented on a change in pull request #2732:
URL: https://github.com/apache/hadoop/pull/2732#discussion_r585357165



##########
File path: hadoop-tools/hadoop-distcp/src/site/markdown/DistCp.md.vm
##########
@@ -362,6 +362,7 @@ Command Line Options
 | `-copybuffersize <copybuffersize>` | Size of the copy buffer to use. By default, `<copybuffersize>` is set to 8192B | |
 | `-xtrack <path>` | Save information about missing source files to the specified path. | This option is only valid with `-update` option. This is an experimental property and it cannot be used with `-atomic` option. |
 | `-direct` | Write directly to destination paths | Useful for avoiding potentially very expensive temporary file rename operations when the destination is an object store |
+| `-useIterator` | Uses single threaded listStatusIterator to build listing | Useful for saving memory at the client side. |

Review comment:
       Thanks @jojochuang for having a look.
   Yes, it indeed isn't meant for object stores. I am trying a multi-threaded approach for object stores too, as part of HADOOP-17558; that won't be as memory-efficient, but it should strike a balance between speed and memory. I have a WIP patch for that as well and will share it on the jira.
   
   This is basically for HDFS, or for any FS where listing is not slow but there are memory constraints. My scenario is mainly DR, where the copy is in general HDFS->HDFS or HDFS->S3.
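
For context, a minimal sketch of the difference the new flag exploits; this is illustrative Java against the public FileSystem API, not the actual DistCp patch. FileSystem#listStatus materializes every child of a directory in one array, while FileSystem#listStatusIterator fetches entries incrementally, so the client holds only one batch at a time:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    public class ListingSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path dir = new Path(args[0]);

        // Eager listing: one array holding a FileStatus for every child.
        // For a directory with millions of entries this dominates the
        // client heap.
        FileStatus[] all = fs.listStatus(dir);

        // Incremental listing: against HDFS the NameNode returns partial
        // listings (dfs.ls.limit entries per RPC), so the client holds
        // only one batch of FileStatus objects at a time.
        RemoteIterator<FileStatus> it = fs.listStatusIterator(dir);
        while (it.hasNext()) {
          FileStatus st = it.next();
          // process st; it can be garbage-collected afterwards
        }
      }
    }

With the change applied, the flag would be passed like any other DistCp option, e.g. hadoop distcp -useIterator hdfs://nn1/src hdfs://nn2/dst (the paths here are placeholders).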




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 559854)
    Time Spent: 40m  (was: 0.5h)

> DistCp: Reduce memory usage on copying huge directories
> -------------------------------------------------------
>
>                 Key: HADOOP-17531
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17531
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Ayush Saxena
>            Priority: Critical
>              Labels: pull-request-available
>         Attachments: MoveToStackIterator.patch, gc-NewD-512M-3.8ML.log
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Presently DistCp uses a producer-consumer setup while building the 
> listing; the input queue and the output queue are both unbounded, so 
> the in-memory listing grows quite huge.
> Relevant code:
> https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L635
> The traversal is breadth-first (it uses a queue instead of the earlier 
> stack), so if the files sit at the lower depths, it effectively opens 
> up the entire tree before it starts processing....
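
As an illustration of the contrast described above (a sketch against the generic FileSystem API, not the SimpleCopyListing code itself): a breadth-first walk over an unbounded queue buffers entire levels of the tree, whereas a depth-first walk over a stack of RemoteIterators keeps at most one open listing per level of depth:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    public class TraversalSketch {
      // Depth-first walk: client memory is bounded by one open iterator
      // per level of depth, instead of by the width of the tree.
      static void depthFirst(FileSystem fs, Path root) throws Exception {
        Deque<RemoteIterator<FileStatus>> stack = new ArrayDeque<>();
        stack.push(fs.listStatusIterator(root));
        while (!stack.isEmpty()) {
          RemoteIterator<FileStatus> it = stack.peek();
          if (!it.hasNext()) {
            stack.pop();
            continue;
          }
          FileStatus st = it.next();
          // write st into the copy listing here
          if (st.isDirectory()) {
            stack.push(fs.listStatusIterator(st.getPath()));
          }
        }
      }
    }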



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
