[ 
https://issues.apache.org/jira/browse/HDFS-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13706684#comment-13706684
 ] 

Jing Zhao commented on HDFS-4912:
---------------------------------

The patch looks good to me. Only some minor comments:

1. In the javadoc of startFileInternal,
{code}
    * 
-   * @return the last block locations if the block is partial or null otherwise
+   * Note that in this method
+   * 
+   * For description of parameters and exceptions thrown see
{code}

Looks like some content is missing here?

2. In appendFileInternal,
{code}
+    final INodeFile myFile = INodeFile.valueOf(inode, src, true);
+    if (isPermissionEnabled) {
+      checkPathAccess(pc, src, FsAction.WRITE);
+    }
+
+    try {
+      if (myFile == null) {
+        throw new FileNotFoundException("failed to append to non-existent file "
+          + src + " on client " + clientMachine);
+      }
{code}

For append, maybe we can check whether the inode is null before converting it to 
an INodeFile? I.e., we could add a null check before the INodeFile.valueOf call. 
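A minimal, self-contained sketch of the suggested ordering (not the actual HDFS code; the class and method names below are simplified stand-ins): checking for null before the valueOf-style conversion lets append report its own, more specific FileNotFoundException instead of relying on whatever the conversion helper throws.

```java
import java.io.FileNotFoundException;

public class AppendNullCheckSketch {

    // Stand-in for the generic INode type.
    static class INode {}

    // Stand-in for INodeFile, with a valueOf-style converter that
    // rejects null input with a generic message.
    static class INodeFile extends INode {
        static INodeFile valueOf(INode inode, String src)
                throws FileNotFoundException {
            if (inode == null) {
                // Generic message a caller would get without its own check.
                throw new FileNotFoundException("File does not exist: " + src);
            }
            return (INodeFile) inode;
        }
    }

    // Suggested ordering: null check first, then convert, so the
    // append-specific error message is the one that surfaces.
    static INodeFile appendFileInternal(INode inode, String src,
            String clientMachine) throws FileNotFoundException {
        if (inode == null) {
            throw new FileNotFoundException(
                "failed to append to non-existent file " + src
                + " on client " + clientMachine);
        }
        return INodeFile.valueOf(inode, src);
    }

    public static void main(String[] args) {
        try {
            appendFileInternal(null, "/foo/bar", "client-1");
        } catch (FileNotFoundException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With the null check placed before the conversion, a missing file produces the append-specific message rather than the converter's generic one.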

Besides these two minor issues, +1 for the patch.
                
> Cleanup FSNamesystem#startFileInternal
> --------------------------------------
>
>                 Key: HDFS-4912
>                 URL: https://issues.apache.org/jira/browse/HDFS-4912
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>         Attachments: HDFS-4912.1.patch, HDFS-4912.patch
>
>
> FSNamesystem#startFileInternal is used by both create and append. This 
> results in ugly if else conditions to consider append/create scenarios. This 
> method can be refactored and the code can be cleaned up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
