[ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17319191#comment-17319191
 ] 

Viraj Jasani commented on HADOOP-17611:
---------------------------------------

[~amaroti] yes, concat() does seem to update the mtime of both destFile and 
its parent dir. concat() appends all source blocks to the target file's block 
list. Hence, although internally it records a modification on the target INode 
and copies the array of source blocks onto it, the high-level responsibility 
of the operation is only to append all src blocks to the target file.

 
{code:java}
/**
 * Concat all the blocks from srcs to trg and delete the srcs files
 * @param fsd FSDirectory
 */
static void unprotectedConcat(FSDirectory fsd, INodesInPath targetIIP,
    INodeFile[] srcList, long timestamp) throws IOException {
  assert fsd.hasWriteLock();
  NameNode.stateChangeLog.debug("DIR* NameSystem.concat to {}",
      targetIIP.getPath());

  final INodeFile trgInode = targetIIP.getLastINode().asFile();
  QuotaCounts deltas = computeQuotaDeltas(fsd, trgInode, srcList);
  verifyQuota(fsd, targetIIP, deltas);

  // the target file can be included in a snapshot
  trgInode.recordModification(targetIIP.getLatestSnapshotId());
  INodeDirectory trgParent = targetIIP.getINode(-2).asDirectory();
  trgInode.concatBlocks(srcList, fsd.getBlockManager());

  // since we are in the same dir - we can use same parent to remove files
  int count = 0;
  for (INodeFile nodeToRemove : srcList) {
    if(nodeToRemove != null) {
      nodeToRemove.clearBlocks();
      // Ensure the nodeToRemove is cleared from snapshot diff list
      nodeToRemove.getParent().removeChild(nodeToRemove,
          targetIIP.getLatestSnapshotId());
      fsd.getINodeMap().remove(nodeToRemove);
      count++;
    }
  }

  trgInode.setModificationTime(timestamp, targetIIP.getLatestSnapshotId());
  trgParent.updateModificationTime(timestamp, targetIIP.getLatestSnapshotId());
  // update quota on the parent directory with deltas
  FSDirectory.unprotectedUpdateCount(targetIIP, targetIIP.length() - 1, deltas);
}

{code}
In the above code, the mtime of the target file INode as well as that of 
its parent dir is updated:
{code:java}
trgInode.setModificationTime(timestamp, targetIIP.getLatestSnapshotId()); 
trgParent.updateModificationTime(timestamp, targetIIP.getLatestSnapshotId()); 
{code}
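For a quick sanity check, the behavior is also visible from the client side. A 
minimal sketch, assuming an HDFS FileSystem and pre-existing files at 
hypothetical paths (/tmp/target, /tmp/src1, /tmp/src2 are illustrative only):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: observe the target mtime changing across concat().
FileSystem fs = FileSystem.get(new Configuration());
Path target = new Path("/tmp/target");
Path[] srcs = { new Path("/tmp/src1"), new Path("/tmp/src2") };

long before = fs.getFileStatus(target).getModificationTime();
fs.concat(target, srcs); // appends src blocks to target, stamps a new mtime
long after = fs.getFileStatus(target).getModificationTime();
// 'after' reflects the concat timestamp, not the original mtime
{code}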
 
{quote}Is that supposed to be preserved by distcp?
{quote}
Good question, perhaps [~ayushtkn] [~weichiu] can help with this.
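
For reference, the fix proposed in the description (capture the mtime before 
concat, reapply it after) would look roughly like this inside 
concatFileChunks. A minimal sketch only, not the actual CopyCommitter code; 
the fs/target/chunks variables are assumptions:
{code:java}
// Capture the target's mtime before concat and restore it afterwards.
// Variable names are illustrative, not the real CopyCommitter fields.
long mtime = fs.getFileStatus(target).getModificationTime();
fs.concat(target, chunks);
fs.setTimes(target, mtime, -1); // -1 leaves the access time unchanged
{code}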

> Distcp parallel file copy breaks the modification time
> ------------------------------------------------------
>
>                 Key: HADOOP-17611
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17611
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Adam Maroti
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The commit for HADOOP-11794 ("Enable distcp to copy blocks in parallel", 
> bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java, inside concatFileChunks, FileSystem.concat is called, 
> which changes the modification time; therefore the modification times of 
> files copied by distcp will not match the source files. However, this only 
> occurs for large enough files, which distcp copies by splitting them up.
> In concatFileChunks, before calling concat, extract the modification time 
> and apply it to the concatenated result file after the concat (probably 
> best -after- before the rename()).


