[ https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16254535#comment-16254535 ]

Tsz Wo Nicholas Sze commented on HDFS-12594:
--------------------------------------------

Thanks for the update.  The patch looks much better.
- rename all dirid to dirId and fileid to fileId in the proto and other code, since we already use fileId and blockId elsewhere.
- SnapshotDiffReportGenerator.INODE_COMPARATOR can be simplified to
{code}
    /**
     * Compare entries by source path, component by component; an ancestor
     * sorts before its descendants, and null entries sort first.
     */
    @Override
    public int compare(DiffReportListingEntry left, DiffReportListingEntry right) {
      if (left == right) {
        return 0;
      } else if (left == null) {
        return -1;
      } else if (right == null) {
        return 1;
      }

      final Comparator<byte[]> cmp = SignedBytes.lexicographicalComparator();
      final byte[][] l = left.getSourcePath();
      final byte[][] r = right.getSourcePath();
      for (int i = 0; i < l.length && i < r.length; i++) {
        final int diff = cmp.compare(l[i], r[i]);
        if (diff != 0) {
          return diff;
        }
      }
      return l.length == r.length ? 0 : l.length > r.length ? 1 : -1;
    }
{code}
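Just to spell out the resulting order, here is a tiny self-contained sketch (the comparePaths helper below simply inlines the loop above on raw byte\[]\[] paths; the class and test paths are illustrative, not HDFS code):
{code}
import com.google.common.primitives.SignedBytes;
import java.util.Comparator;

public class InodeOrderDemo {
  // Same component-wise comparison as the comparator above, on raw paths.
  static int comparePaths(byte[][] l, byte[][] r) {
    final Comparator<byte[]> cmp = SignedBytes.lexicographicalComparator();
    for (int i = 0; i < l.length && i < r.length; i++) {
      final int diff = cmp.compare(l[i], r[i]);
      if (diff != 0) {
        return diff;
      }
    }
    return l.length == r.length ? 0 : l.length > r.length ? 1 : -1;
  }

  public static void main(String[] args) {
    final byte[][] dir = { "sub".getBytes() };
    final byte[][] file = { "sub".getBytes(), "file1".getBytes() };
    System.out.println(comparePaths(dir, file));  // -1: ancestor sorts first
    System.out.println(comparePaths(file, dir));  //  1: descendant sorts after
    System.out.println(comparePaths(dir, dir));   //  0: equal paths
  }
}
{code}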
- DFSUtilClient.bytes2byteArray and DFSUtil.bytes2byteArray are very similar, but there is a small difference when len == 0:
-* DFSUtilClient returns new byte\[0]\[] (an empty array) and
-* DFSUtil returns new byte\[]\[]\{null} (a one-element array holding null).
Is it a bug?  See the sketch after this list for why the two are not interchangeable.
- We should remove DFSUtil.bytes2byteArray to reduce code duplication.
- Use Snapshot.getSnapshotId(..) instead of CURRENT_STATE_ID.
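Here is a minimal, self-contained sketch of why the two return values behave differently for callers (only the two return expressions come from the real methods; the class and method names below are made up for illustration):
{code}
// Hedged sketch: only the two len == 0 return values are taken from
// DFSUtilClient/DFSUtil; the surrounding class is illustrative.
public class Bytes2ByteArrayDemo {

  static byte[][] clientVariant() {
    return new byte[0][];          // DFSUtilClient: empty array
  }

  static byte[][] utilVariant() {
    return new byte[][] { null };  // DFSUtil: one-element array holding null
  }

  public static void main(String[] args) {
    // Iterating the empty array is a harmless no-op.
    for (byte[] component : clientVariant()) {
      System.out.println(component.length);
    }
    // Iterating the {null} array hands null to the loop body:
    // component.length throws NullPointerException.
    for (byte[] component : utilVariant()) {
      System.out.println(component.length);
    }
  }
}
{code}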

BTW, the patch has a lot of unnecessarily short lines, which makes it hard to read.  E.g.
{code}
    if (node.isDirectory()) {
      final ChildrenDiff diff = new ChildrenDiff();
      INodeDirectory dir = node.asDirectory();
      if (processFlag) {
        DirectoryWithSnapshotFeature sf = dir.getDirectoryWithSnapshotFeature();
        if (sf != null) {
          boolean change =
              sf.computeDiffBetweenSnapshots(earlierSnapshot, laterSnapshot,
                  diff, dir);
          if (change) {
            processFlag =
                diffReport.addDirDiff(dir, relativePath, diff);
            if (!processFlag) {
              return false;
            }
          }
        }
      }
      boolean iterate = false;
      ReadOnlyList<INode> children =
          dir.getChildrenList(earlierSnapshot.getId());
      for (INode child : children) {
        if (!processFlag && !iterate && !Arrays
            .equals(resumePath[level], child.getLocalNameBytes())) {
          continue;
        }
        iterate = true;
        level = level + 1;
        final byte[] name = child.getLocalNameBytes();
        boolean toProcess = diff.searchIndex(ListType.DELETED, name) < 0;
        if (!toProcess && child instanceof INodeReference.WithName) {
          byte[][] renameTargetPath =
              findRenameTargetPath(snapshotDir, (WithName) child,
                  laterSnapshot == null ? Snapshot.CURRENT_STATE_ID :
                      laterSnapshot.getId());
          if (renameTargetPath != null) {
            toProcess = true;
          }
        }
        if (toProcess) {
          parentPath.add(name);
          processFlag = computeDiffRecursively(snapshotDir, child, parentPath,
              diffReport, resumePath, level, processFlag);
          parentPath.remove(parentPath.size() - 1);
          if (!processFlag) {
            return false;
          }
        }
      }
{code}
Could you reformat them?
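For instance, the first branch could be joined up along these lines (a formatting-only sketch of the same logic; behavior unchanged):
{code}
    if (processFlag) {
      DirectoryWithSnapshotFeature sf = dir.getDirectoryWithSnapshotFeature();
      if (sf != null
          && sf.computeDiffBetweenSnapshots(earlierSnapshot, laterSnapshot, diff, dir)) {
        processFlag = diffReport.addDirDiff(dir, relativePath, diff);
        if (!processFlag) {
          return false;
        }
      }
    }
{code}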


> SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit
> -------------------------------------------------------------------------------------------
>
>                 Key: HDFS-12594
>                 URL: https://issues.apache.org/jira/browse/HDFS-12594
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>         Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, 
> HDFS-12594.003.patch, HDFS-12594.004.patch, HDFS-12594.005.patch, 
> HDFS-12594.006.patch, HDFS-12594.007.patch, SnapshotDiff_Improvemnets .pdf
>
>
> The snapshotDiff command fails if the snapshotDiff report size is larger than 
> the configuration value of ipc.maximum.response.length, which is 128 MB by 
> default. 
> Worst case, with all rename ops in snapshots, each with source and target 
> names of MAX_PATH_LEN (8k characters), a single response could hold at most 
> 128 MB / (2 x 8 KB) = 8192 renames.
>  
> SnapshotDiff is currently used by distcp to optimize copy operations, and if 
> the diff report exceeds the limit, it fails with the below exception:
> Test set: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> -------------------------------------------------------------------------------
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport)
>   Time elapsed: 111.906 sec  <<< ERROR!
> java.io.IOException: Failed on local exception: 
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; 
> Host Details : local host is: "hw15685.local/10.200.5.230"; destination host 
> is: "localhost":59808;
> Attached is the proposal for the changes required.


