[ 
https://issues.apache.org/jira/browse/HDFS-245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12836973#action_12836973
 ] 

Sanjay Radia commented on HDFS-245:
-----------------------------------

Thanks for addressing the issues mentioned in the comments.
I caught a few things you changed since patch 31.
(You have already responded to some of these in my clarifying email questions -
please document your responses in this Jira.)
Most of the issues below can be addressed in separate Jiras.

* Unnecessary changes in whitespace
   There are approximately 10 accidental, unnecessary whitespace changes that you
made. I will send you the list via email to keep this comment small.
 
* UnresolvedLinkException - thanks for fixing these; however, you missed a few.
Most of these can be fixed in a separate Jira.
** getLinkTarget in DFSClient and Hdfs
** the different variants of create in DFSClient - for consistency.
** ClientProtocol#saveNamespace - should not throw the exception (no path 
parameter)
** FSNamesystem#metaSave should convert the UnresolvedLinkException into
another exception (IllegalArgumentException?)
*** We don't allow one to save the metadata into a remote file system.
*** However, we could allow traversing of local symlinks - but the current
implementation processes these on the client side. What is the best way to deal
with local symlinks for this operation? (For now, we should just document that
this operation does not work for local symlinks.)
** You forgot to add the exceptions to FSNamesystem. Best done in a separate 
Jira.
** Lease#findPath throws an UnresolvedLinkException
*** this should not occur
*** the method should catch it and log an error/assertion
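*** a rough sketch of the catch-and-log I have in mind (the resolve() call below is a
hypothetical placeholder, not the actual code):
{code}
// Sketch only: findPath's real body and the resolve() helper are placeholders.
String findPath(INodeFileUnderConstruction pendingFile) {
  try {
    return resolve(pendingFile);                 // hypothetical call that may throw
  } catch (UnresolvedLinkException e) {
    // Lease paths are stored fully resolved, so this should never happen.
    LOG.error("Unexpected UnresolvedLinkException in Lease.findPath", e);
    assert false : "lease paths should not contain symlinks";
    return null;
  }
}
{code}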

* After catching the isDir() incompatibility issue in HADOOP-6421, I realized that
we should, in a *separate* Jira, add an INode#isFile() and FSDirectory#isFile(src)
and clean up the calls to !isDir() and !isDirectory() (a sketch of the isFile()
helpers follows the startFileInternal example below).
Please add these checks as part of that Jira:
** FSImage#loadFilesUnderConstruction
         if (old.isDirectory()) ...
          add an additional check for isSymlink() -- or better !isFile()
** FSNamesystem#startFileInternal
{code}
} else if (myFile.isDirectory()) {
  ....
} else if (myFile.isSymlink()) {
  assert false : "cannot reach here"; // <-- add
}
{code}
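** For the proposed isFile() helpers, a minimal sketch of what I have in mind (the
getINode call below is a placeholder, not the actual FSDirectory lookup):
{code}
// Sketch only.
abstract class INode {
  boolean isFile() { return false; }            // false for directories and symlinks
}

class INodeFile extends INode {
  @Override boolean isFile() { return true; }
}

class FSDirectory {
  INode getINode(String src) { return null; }   // placeholder for the actual resolution
  boolean isFile(String src) {                  // convenience form, analogous to isDir(src)
    INode node = getINode(src);
    return node != null && node.isFile();
  }
}
{code}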
* Minor code improvements for the quota changes you added since the last patch:
** the "> -2" should be "== -1", just like you used "== -2" for symlinks everywhere
else (see the corrected form after the snippet below)
{code}
        // get quota only when the node is a directory
        long nsQuota = -1L;
        if (imgVersion <= -16 && blocks == null && numBlocks > -2) {
          nsQuota = in.readLong();
        }
        long dsQuota = -1L;
        if (imgVersion <= -18 && blocks == null && numBlocks > -2) {
          dsQuota = in.readLong();
        }
{code}
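** i.e., the checks would then read something like this (suggested change only):
{code}
        // get quota only when the node is a directory (numBlocks == -1)
        long nsQuota = -1L;
        if (imgVersion <= -16 && blocks == null && numBlocks == -1) {
          nsQuota = in.readLong();
        }
        long dsQuota = -1L;
        if (imgVersion <= -18 && blocks == null && numBlocks == -1) {
          dsQuota = in.readLong();
        }
{code}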
* Bug in verifyParentDir (a change since patch 31, which I last reviewed)
   verifyParentDir calls
      INode[] pathINodes = dir.getExistingPathINodes(parent.toString(), false);
   Should followLink be true?
   Add tests in your tests Jira to cover this one:
** create, createSymlink, and mkdir with createParent=false and a symlink in
the parent path (sketch below).
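** a sketch of one such test (assumes a FileContext fc backed by a MiniDFSCluster;
names are illustrative, not actual test code):
{code}
// Sketch only: cluster/FileContext setup elided.
@Test
public void testMkdirSymlinkParentNoCreateParent() throws Exception {
  Path dir     = new Path("/realDir");
  Path link    = new Path("/linkToDir");
  Path viaLink = new Path("/linkToDir/child");

  fc.mkdir(dir, FsPermission.getDefault(), true);
  fc.createSymlink(dir, link, false);

  // createParent == false and the parent component is a symlink:
  // verifyParentDir should resolve the link rather than fail here.
  fc.mkdir(viaLink, FsPermission.getDefault(), false);
  assertTrue(fc.getFileStatus(new Path("/realDir/child")).isDir());
}
{code}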

> Create symbolic links in HDFS
> -----------------------------
>
>                 Key: HDFS-245
>                 URL: https://issues.apache.org/jira/browse/HDFS-245
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: dhruba borthakur
>            Assignee: Eli Collins
>         Attachments: 4044_20081030spi.java, design-doc-v4.txt, 
> designdocv1.txt, designdocv2.txt, designdocv3.txt, 
> HADOOP-4044-strawman.patch, symlink-0.20.0.patch, symlink-25-hdfs.patch, 
> symlink-26-hdfs.patch, symlink-26-hdfs.patch, symLink1.patch, symLink1.patch, 
> symLink11.patch, symLink12.patch, symLink13.patch, symLink14.patch, 
> symLink15.txt, symLink15.txt, symlink16-common.patch, symlink16-hdfs.patch, 
> symlink16-mr.patch, symlink17-common.txt, symlink17-hdfs.txt, 
> symlink18-common.txt, symlink19-common-delta.patch, symlink19-common.txt, 
> symlink19-common.txt, symlink19-hdfs-delta.patch, symlink19-hdfs.txt, 
> symlink20-common.patch, symlink20-hdfs.patch, symlink21-common.patch, 
> symlink21-hdfs.patch, symlink22-common.patch, symlink22-hdfs.patch, 
> symlink23-common.patch, symlink23-hdfs.patch, symlink24-hdfs.patch, 
> symlink27-hdfs.patch, symlink28-hdfs.patch, symlink29-hdfs.patch, 
> symlink29-hdfs.patch, symlink30-hdfs.patch, symlink31-hdfs.patch, 
> symlink33-hdfs.patch, symlink35-hdfs.patch, symlink36-hdfs.patch, 
> symlink37-hdfs.patch, symlink38-hdfs.patch, symlink39-hdfs.patch, 
> symLink4.patch, symlink40-hdfs.patch, symLink5.patch, symLink6.patch, 
> symLink8.patch, symLink9.patch
>
>
> HDFS should support symbolic links. A symbolic link is a special type of file 
> that contains a reference to another file or directory in the form of an 
> absolute or relative path and that affects pathname resolution. Programs 
> which read or write to files named by a symbolic link will behave as if 
> operating directly on the target file. However, archiving utilities can 
> handle symbolic links specially and manipulate them directly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
