[jira] [Updated] (HDFS-4773) Fix bugs in quota usage updating/computation

2013-04-29 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4773:


Attachment: HDFS-4773.001.patch

Initial bug fix patch.

 Fix bugs in quota usage updating/computation
 

 Key: HDFS-4773
 URL: https://issues.apache.org/jira/browse/HDFS-4773
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Attachments: HDFS-4773.001.patch


 1. FileWithSnapshot#updateQuotaAndCollectBlocks did not consider the scenario 
 where all the snapshots have been deleted from the snapshot copy of a deleted 
 file. This may lead to a divide-by-zero error.
 2. When computing the quota usage for a WithName node and its subtree, if the 
 snapshot associated with the WithName node at the time of the rename operation 
 has been deleted, we should compute the quota based on the posterior snapshot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4712) New libhdfs method hdfsGetDataNodes

2013-04-29 Thread andrea manzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644406#comment-13644406
 ] 

andrea manzi commented on HDFS-4712:


Hi Colin,
thanks a lot for your comments and for the tips; they were really useful and 
now my code looks better (I hope :-)).
Two things:
1) I have kept the signature with numEntries in the hdfsFreeDataNodeInfo 
method, because I aligned the method with hdfsFreeFileInfo. But I can also 
remove the parameter and use a null pointer as you suggested.
2) I added a method (enum_to_string) to convert the enum value to a string, 
because when generating the enum type through JNI to pass as a parameter to 
getDataNodeStats, I need the string associated with the enum value.

I have also created a patch as described in the wiki and attached it to the 
ticket.

thanks again
Andrea


 New libhdfs method hdfsGetDataNodes
 ---

 Key: HDFS-4712
 URL: https://issues.apache.org/jira/browse/HDFS-4712
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: libhdfs
Reporter: andrea manzi
 Attachments: HDFS-4712.patch, hdfs.c.diff, hdfs.h.diff


 We have implemented a possible extension to libhdfs to retrieve information 
 about the available datanodes (there was a mail on the hadoop-hdfs-dev 
 mailing list initially about this:
 http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201204.mbox/%3CCANhO-
 s0mvororrxpjnjbql6brkj4c7l+u816xkdc+2r0whj...@mail.gmail.com%3E).
 I would like to know how to proceed to create a patch, because on the wiki 
 http://wiki.apache.org/hadoop/HowToContribute I can see info about Java 
 patches but nothing related to extensions in C.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4712) New libhdfs method hdfsGetDataNodes

2013-04-29 Thread andrea manzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

andrea manzi updated HDFS-4712:
---

Attachment: HDFS-4712.patch

 New libhdfs method hdfsGetDataNodes
 ---

 Key: HDFS-4712
 URL: https://issues.apache.org/jira/browse/HDFS-4712
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: libhdfs
Reporter: andrea manzi
 Attachments: HDFS-4712.patch, hdfs.c.diff, hdfs.h.diff


 We have implemented a possible extension to libhdfs to retrieve information 
 about the available datanodes (there was a mail on the hadoop-hdfs-dev 
 mailing list initially about this:
 http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201204.mbox/%3CCANhO-
 s0mvororrxpjnjbql6brkj4c7l+u816xkdc+2r0whj...@mail.gmail.com%3E).
 I would like to know how to proceed to create a patch, because on the wiki 
 http://wiki.apache.org/hadoop/HowToContribute I can see info about Java 
 patches but nothing related to extensions in C.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4734) HDFS Tests that use ShellCommandFencer are broken on Windows

2013-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644410#comment-13644410
 ] 

Hudson commented on HDFS-4734:
--

Integrated in Hadoop-Yarn-trunk #198 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/198/])
HDFS-4734. HDFS Tests that use ShellCommandFencer are broken on Windows. 
Contributed by Arpit Agarwal. (Revision 1476877)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476877
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java


 HDFS Tests that use ShellCommandFencer are broken on Windows
 

 Key: HDFS-4734
 URL: https://issues.apache.org/jira/browse/HDFS-4734
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4734.001.patch, HDFS-4734.002.patch, 
 HDFS-4734.003.patch


 The following tests use the POSIX true/false commands which are not available 
 on Windows.
 # TestDFSHAAdmin
 # TestDFSHAAdminMiniCluster
 # TestNodeFencer
 Additionally, ShellCommandFencer has a hard-coded dependency on bash (also 
 documented at 
 https://hadoop.apache.org/docs/r2.0.3-alpha/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4734) HDFS Tests that use ShellCommandFencer are broken on Windows

2013-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644464#comment-13644464
 ] 

Hudson commented on HDFS-4734:
--

Integrated in Hadoop-Hdfs-trunk #1387 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1387/])
HDFS-4734. HDFS Tests that use ShellCommandFencer are broken on Windows. 
Contributed by Arpit Agarwal. (Revision 1476877)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476877
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java


 HDFS Tests that use ShellCommandFencer are broken on Windows
 

 Key: HDFS-4734
 URL: https://issues.apache.org/jira/browse/HDFS-4734
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4734.001.patch, HDFS-4734.002.patch, 
 HDFS-4734.003.patch


 The following tests use the POSIX true/false commands which are not available 
 on Windows.
 # TestDFSHAAdmin
 # TestDFSHAAdminMiniCluster
 # TestNodeFencer
 Additionally, ShellCommandFencer has a hard-coded dependency on bash (also 
 documented at 
 https://hadoop.apache.org/docs/r2.0.3-alpha/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4734) HDFS Tests that use ShellCommandFencer are broken on Windows

2013-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644500#comment-13644500
 ] 

Hudson commented on HDFS-4734:
--

Integrated in Hadoop-Mapreduce-trunk #1414 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1414/])
HDFS-4734. HDFS Tests that use ShellCommandFencer are broken on Windows. 
Contributed by Arpit Agarwal. (Revision 1476877)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476877
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java


 HDFS Tests that use ShellCommandFencer are broken on Windows
 

 Key: HDFS-4734
 URL: https://issues.apache.org/jira/browse/HDFS-4734
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4734.001.patch, HDFS-4734.002.patch, 
 HDFS-4734.003.patch


 The following tests use the POSIX true/false commands which are not available 
 on Windows.
 # TestDFSHAAdmin
 # TestDFSHAAdminMiniCluster
 # TestNodeFencer
 Additionally, ShellCommandFencer has a hard-coded dependency on bash (also 
 documented at 
 https://hadoop.apache.org/docs/r2.0.3-alpha/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4734) HDFS Tests that use ShellCommandFencer are broken on Windows

2013-04-29 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644573#comment-13644573
 ] 

Arpit Agarwal commented on HDFS-4734:
-

Thanks Suresh and Chris for reviewing!

 HDFS Tests that use ShellCommandFencer are broken on Windows
 

 Key: HDFS-4734
 URL: https://issues.apache.org/jira/browse/HDFS-4734
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4734.001.patch, HDFS-4734.002.patch, 
 HDFS-4734.003.patch


 The following tests use the POSIX true/false commands which are not available 
 on Windows.
 # TestDFSHAAdmin
 # TestDFSHAAdminMiniCluster
 # TestNodeFencer
 Additionally, ShellCommandFencer has a hard-coded dependency on bash (also 
 documented at 
 https://hadoop.apache.org/docs/r2.0.3-alpha/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-29 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644593#comment-13644593
 ] 

Chris Nauroth commented on HDFS-4610:
-

This patch can't go to branch-2 yet.  It's dependent on HADOOP-9413, which is 
dependent on hadoop.dll for native I/O on Windows.  That code doesn't exist in 
branch-2.

 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.3.patch, HDFS-4610.commonfileutils.patch


 Switch to using common utils described in HADOOP-9413 that work well 
 cross-platform.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4578) Restrict snapshot IDs to 24-bits wide

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4578:
-

Priority: Minor  (was: Major)
Hadoop Flags: Reviewed

+1 patch looks good.

 Restrict snapshot IDs to 24-bits wide
 -

 Key: HDFS-4578
 URL: https://issues.apache.org/jira/browse/HDFS-4578
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
Priority: Minor
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4578.004.patch, HDFS-4578.patch, HDFS-4578.patch, 
 HDFS-4578.patch


 Snapshot IDs will be restricted to 24 bits. This will allow at most ~16 
 million snapshots globally.
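
 For reference, the ~16 million figure is just the size of a 24-bit ID space. 
 A minimal sketch of the arithmetic (the constant names here are illustrative, 
 not the patch's):
 {code}
 // A 24-bit ID space holds 2^24 = 16,777,216 (~16 million) distinct values.
 static final int SNAPSHOT_ID_BITS = 24;
 static final int MAX_SNAPSHOT_IDS = 1 << SNAPSHOT_ID_BITS; // 16,777,216
 {code}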

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4578) Restrict snapshot IDs to 24-bits wide

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4578.
--

Resolution: Fixed

I have committed this.  Thanks, Arpit!

 Restrict snapshot IDs to 24-bits wide
 -

 Key: HDFS-4578
 URL: https://issues.apache.org/jira/browse/HDFS-4578
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
Priority: Minor
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4578.004.patch, HDFS-4578.patch, HDFS-4578.patch, 
 HDFS-4578.patch


 Snapshot IDs will be restricted to 24 bits. This will allow at most ~16 
 million snapshots globally.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4773) Fix bugs in quota usage updating/computation

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644652#comment-13644652
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4773:
--

- getDiffById(final int snapshotId) and getDiff(Snapshot snapshot) are very 
similar.  Could you change getDiff to call getDiffById (see the sketch below)?

- remove the commented code in FileWithSnapshot.

- Could you also fix TestOfflineImageViewer?
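
A minimal sketch of that delegation (the {{Snapshot#getId()}} accessor and the 
null handling are assumptions, not the actual patch):
{code}
// Route the snapshot-based lookup through the id-based one so the
// search logic lives in a single place.
FileDiff getDiff(Snapshot snapshot) {
  return snapshot == null ? null : getDiffById(snapshot.getId());
}
{code}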

 Fix bugs in quota usage updating/computation
 

 Key: HDFS-4773
 URL: https://issues.apache.org/jira/browse/HDFS-4773
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Attachments: HDFS-4773.001.patch


 1. FileWithSnapshot#updateQuotaAndCollectBlocks did not consider the scenario 
 where all the snapshots have been deleted from the snapshot copy of a deleted 
 file. This may lead to a divide-by-zero error.
 2. When computing the quota usage for a WithName node and its subtree, if the 
 snapshot associated with the WithName node at the time of the rename operation 
 has been deleted, we should compute the quota based on the posterior snapshot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4578) Restrict snapshot IDs to 24-bits wide

2013-04-29 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644682#comment-13644682
 ] 

Arpit Agarwal commented on HDFS-4578:
-

Thanks Nicholas!

 Restrict snapshot IDs to 24-bits wide
 -

 Key: HDFS-4578
 URL: https://issues.apache.org/jira/browse/HDFS-4578
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
Priority: Minor
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4578.004.patch, HDFS-4578.patch, HDFS-4578.patch, 
 HDFS-4578.patch


 Snapshot IDs will be restricted to 24 bits. This will allow at most ~16 
 million snapshots globally.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4774) Backport HDFS-4525 'Provide an API for knowing that whether file is closed or not' to branch 1.1

2013-04-29 Thread Ted Yu (JIRA)
Ted Yu created HDFS-4774:


 Summary: Backport HDFS-4525 'Provide an API for knowing that 
whether file is closed or not' to branch 1.1
 Key: HDFS-4774
 URL: https://issues.apache.org/jira/browse/HDFS-4774
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Ted Yu


HDFS-4525 complements the lease recovery API by allowing the user to know 
whether the recovery has completed.

This JIRA backports the API to branch 1.1.
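
As a rough usage sketch of how the backported API pairs with lease recovery 
(the path and polling interval are illustrative; this assumes the HDFS-4525 
signatures DistributedFileSystem#recoverLease(Path) and #isFileClosed(Path)):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class LeaseRecoveryPoll {
  public static void main(String[] args) throws Exception {
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    Path p = new Path("/logs/region-wal.0001"); // hypothetical path
    boolean recovered = dfs.recoverLease(p);    // true if already closed
    while (!recovered && !dfs.isFileClosed(p)) {
      Thread.sleep(1000);                       // wait for recovery to finish
      recovered = dfs.recoverLease(p);
    }
  }
}
{code}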

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4774) Backport HDFS-4525 'Provide an API for knowing that whether file is closed or not' to branch 1.1

2013-04-29 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-4774:
-

Attachment: 4774.txt

 Backport HDFS-4525 'Provide an API for knowing that whether file is closed or 
 not' to branch 1.1
 

 Key: HDFS-4774
 URL: https://issues.apache.org/jira/browse/HDFS-4774
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 4774.txt


 HDFS-4525 complements the lease recovery API by allowing the user to know 
 whether the recovery has completed.
 This JIRA backports the API to branch 1.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4733) Make HttpFS username pattern configurable

2013-04-29 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644693#comment-13644693
 ] 

Aaron T. Myers commented on HDFS-4733:
--

Thanks, Daryn. I agree with what you're saying in principle, but that seems 
like a much more involved change for this little issue. Not sure it's worth it.

+1, I'm going to commit this momentarily.

 Make HttpFS username pattern configurable
 -

 Key: HDFS-4733
 URL: https://issues.apache.org/jira/browse/HDFS-4733
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.4-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HDFS-4733.patch, HDFS-4733.patch, HDFS-4733.patch


 Continuing with the saga, valid usernames seem to be quite different across 
 Unix distributions. Now we are running into the case of a setup that uses 
 fully numeric names.
 We should make the username pattern configurable via httpfs-site.xml.
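
 As a rough sketch of what a configurable pattern check could look like (the 
 property lookup and regexes are illustrative stand-ins, not the actual HttpFS 
 code, which would read the pattern from httpfs-site.xml):
 {code}
 import java.util.regex.Pattern;

 public class UserPatternCheck {
   // Illustrative default: the usual Unix username shape.
   static final String DEFAULT = "^[A-Za-z_][A-Za-z0-9._-]*$";

   public static void main(String[] args) {
     // Stand-in for the httpfs-site.xml lookup.
     String configured = System.getProperty("user.pattern", "^[0-9]+$");
     System.out.println(Pattern.matches(configured, "12345")); // true
     System.out.println(Pattern.matches(DEFAULT, "12345"));    // false
   }
 }
 {code}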

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4712) New libhdfs method hdfsGetDataNodes

2013-04-29 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644694#comment-13644694
 ] 

Colin Patrick McCabe commented on HDFS-4712:


bq. +dataNodeInfoList = calloc(jNumDataNodeInfos, 
sizeof(hdfsDataNodeInfo**));

Should be sizeof(hdfsDataNodeInfo*)... although it doesn't matter, since those 
pointers will be the same size :)

bq. +dataNodeInfos->location =
+ (const char*)((*env)->GetStringUTFChars(env, jVal.l, NULL));

Need to check to make sure this didn't fail.  If it failed, an exception will 
be raised.  I suggest using {{newCStr}}, which handles the exception issues for 
you.

bq. +dataNodeInfoList[i] = (hdfsDataNodeInfo*) 
malloc(sizeof(hdfsDataNodeInfo));

You need to check if this failed (we check all the other allocation sites).

{code}
+if (jExc) {
+    printExceptionAndFree(env, jExc, PRINT_EXC_ALL,
+        "org.apache.hadoop.hdfs.DistributedFileSystem::getDataNodeStats");
+    errno = EPERM;
+    goto done;
+}
+
+jobjectArray jDatanodeInfos = NULL;
{code}

Please don't use C99-style declarations after the start of the function.  They 
don't work well with gotos.  For example, in this case, if you follow that 
goto, jDatanodeInfos has undefined contents (which is likely to cause crashes 
when you try to clean up).  Put all the declarations at the top, except scoped 
declarations.

 New libhdfs method hdfsGetDataNodes
 ---

 Key: HDFS-4712
 URL: https://issues.apache.org/jira/browse/HDFS-4712
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: libhdfs
Reporter: andrea manzi
 Attachments: HDFS-4712.patch, hdfs.c.diff, hdfs.h.diff


 We have implemented a possible extension to libhdfs to retrieve information 
 about the available datanodes (there was a mail on the hadoop-hdfs-dev 
 mailing list initially about this:
 http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201204.mbox/%3CCANhO-
 s0mvororrxpjnjbql6brkj4c7l+u816xkdc+2r0whj...@mail.gmail.com%3E).
 I would like to know how to proceed to create a patch, because on the wiki 
 http://wiki.apache.org/hadoop/HowToContribute I can see info about Java 
 patches but nothing related to extensions in C.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4733) Make HttpFS username pattern configurable

2013-04-29 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-4733:
-

   Resolution: Fixed
Fix Version/s: 2.0.5-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2. Thanks a lot for the 
contribution, Tucu, and thanks also to Chris and Daryn for the reviews.

 Make HttpFS username pattern configurable
 -

 Key: HDFS-4733
 URL: https://issues.apache.org/jira/browse/HDFS-4733
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.4-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.5-beta

 Attachments: HDFS-4733.patch, HDFS-4733.patch, HDFS-4733.patch


 Continuing with the saga, valid usernames seem to be quite different across 
 Unix distributions. Now we are running into the case of a setup that uses 
 fully numeric names.
 We should make the username pattern configurable via httpfs-site.xml.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4733) Make HttpFS username pattern configurable

2013-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644706#comment-13644706
 ] 

Hudson commented on HDFS-4733:
--

Integrated in Hadoop-trunk-Commit #3689 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3689/])
HDFS-4733. Make HttpFS username pattern configurable. Contributed by 
Alejandro Abdelnur. (Revision 1477237)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1477237
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServerWebApp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/UserProvider.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/resources/httpfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSCustomUserName.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/wsrs/TestUserProvider.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Make HttpFS username pattern configurable
 -

 Key: HDFS-4733
 URL: https://issues.apache.org/jira/browse/HDFS-4733
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.4-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.5-beta

 Attachments: HDFS-4733.patch, HDFS-4733.patch, HDFS-4733.patch


 Continuing with the saga, valid usernames seem to be quite different across 
 Unix distributions. Now we are running into the case of a setup that uses 
 fully numeric names.
 We should make the username pattern configurable via httpfs-site.xml.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3934) duplicative dfs_hosts entries handled wrong

2013-04-29 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644721#comment-13644721
 ] 

Colin Patrick McCabe commented on HDFS-3934:


* properly handle DatanodeID objects where {{getIpAddr()}} and/or 
{{getHostName()}} return {{null}}.  This fixes the two failing tests.  (See 
the sketch after this list.)

* don't log an error when an include/exclude file is set to the empty string.  
(this is perfectly acceptable; it just means that we don't have such a file.)

* log the name of the include/exclude file we failed to read in our error 
message.
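
A minimal sketch of that null-tolerant handling (the helper name and fallback 
policy are illustrative, not the actual patch):
{code}
import org.apache.hadoop.hdfs.protocol.DatanodeID;

class DatanodeKeys {
  // Prefer the IP, fall back to the hostname, and never return null, so the
  // node can still be keyed in a map and matched against hosts-file entries.
  static String registrationKey(DatanodeID node) {
    String ip = node.getIpAddr();
    if (ip != null) {
      return ip;
    }
    String host = node.getHostName();
    return host != null ? host : "";
  }
}
{code}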

 duplicative dfs_hosts entries handled wrong
 ---

 Key: HDFS-3934
 URL: https://issues.apache.org/jira/browse/HDFS-3934
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Andy Isaacson
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-3934.001.patch, HDFS-3934.002.patch, 
 HDFS-3934.003.patch, HDFS-3934.004.patch, HDFS-3934.005.patch, 
 HDFS-3934.006.patch, HDFS-3934.007.patch, HDFS-3934.008.patch, 
 HDFS-3934.010.patch, HDFS-3934.011.patch, HDFS-3934.012.patch


 A dead DN listed in dfs_hosts_allow.txt by IP and in dfs_hosts_exclude.txt by 
 hostname ends up being displayed twice in {{dfsnodelist.jsp?whatNodes=DEAD}} 
 after the NN restarts, because {{getDatanodeListForReport}} does not handle 
 such a pseudo-duplicate correctly:
 # the "Remove any nodes we know about from the map" loop no longer has the 
 knowledge to remove the spurious entries
 # the "The remaining nodes are ones that are referenced by the hosts files" 
 loop does not do hostname lookups, so it does not know that the IP and 
 hostname refer to the same host.
 Relatedly, such an IP-based dfs_hosts entry results in a cosmetic problem in 
 the JSP output: the *Node* column shows ":50010" as the nodename, with HTML 
 markup {{<a 
 href="http://:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F&nnaddr=172.29.97.196:8020"
  title="172.29.97.216:50010">:50010</a>}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3934) duplicative dfs_hosts entries handled wrong

2013-04-29 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3934:
---

Attachment: HDFS-3934.012.patch

 duplicative dfs_hosts entries handled wrong
 ---

 Key: HDFS-3934
 URL: https://issues.apache.org/jira/browse/HDFS-3934
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Andy Isaacson
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-3934.001.patch, HDFS-3934.002.patch, 
 HDFS-3934.003.patch, HDFS-3934.004.patch, HDFS-3934.005.patch, 
 HDFS-3934.006.patch, HDFS-3934.007.patch, HDFS-3934.008.patch, 
 HDFS-3934.010.patch, HDFS-3934.011.patch, HDFS-3934.012.patch


 A dead DN listed in dfs_hosts_allow.txt by IP and in dfs_hosts_exclude.txt by 
 hostname ends up being displayed twice in {{dfsnodelist.jsp?whatNodes=DEAD}} 
 after the NN restarts, because {{getDatanodeListForReport}} does not handle 
 such a pseudo-duplicate correctly:
 # the "Remove any nodes we know about from the map" loop no longer has the 
 knowledge to remove the spurious entries
 # the "The remaining nodes are ones that are referenced by the hosts files" 
 loop does not do hostname lookups, so it does not know that the IP and 
 hostname refer to the same host.
 Relatedly, such an IP-based dfs_hosts entry results in a cosmetic problem in 
 the JSP output: the *Node* column shows ":50010" as the nodename, with HTML 
 markup {{<a 
 href="http://:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F&nnaddr=172.29.97.196:8020"
  title="172.29.97.216:50010">:50010</a>}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4775) Fix HA documentation of ShellCommandFencer for Windows

2013-04-29 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-4775:
---

 Summary: Fix HA documentation of ShellCommandFencer for Windows
 Key: HDFS-4775
 URL: https://issues.apache.org/jira/browse/HDFS-4775
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0


ShellCommandFencer documentation states that it uses bash. Update it to mention 
that bash is no longer required on Windows and that cmd.exe is used instead. 
Fencer scripts must be valid cmd.exe scripts.

https://hadoop.apache.org/docs/r2.0.3-alpha/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4773) Fix bugs in quota usage updating/computation

2013-04-29 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4773:


Attachment: HDFS-4773.002.patch

Update the patch to address Nicholas's comments.

 Fix bugs in quota usage updating/computation
 

 Key: HDFS-4773
 URL: https://issues.apache.org/jira/browse/HDFS-4773
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Attachments: HDFS-4773.001.patch, HDFS-4773.002.patch


 1. FileWithSnapshot#updateQuotaAndCollectBlocks did not consider the scenario 
 where all the snapshots have been deleted from the snapshot copy of a deleted 
 file. This may lead to a divide-by-zero error.
 2. When computing the quota usage for a WithName node and its subtree, if the 
 snapshot associated with the WithName node at the time of the rename operation 
 has been deleted, we should compute the quota based on the posterior snapshot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4698) provide client-side metrics for remote reads, local reads, and short-circuit reads

2013-04-29 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4698:
---

Attachment: HDFS-4698.003.patch

When creating {{DatanodeID}} objects, {{JspHelper}} needs to use 
{{InetAddress#getHostAddress}}, not {{InetAddress#toString}}.  The latter 
returns strings like {{keter/127.0.0.1}}, which can't be parsed by 
{{NetUtils#createSocketAddr}} (and probably much else).
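
A minimal standalone illustration of the difference (not the JspHelper change 
itself):
{code}
import java.net.InetAddress;

public class HostAddressDemo {
  public static void main(String[] args) throws Exception {
    InetAddress addr = InetAddress.getLocalHost();
    // toString() yields "hostname/ip", e.g. "keter/127.0.0.1", which
    // NetUtils#createSocketAddr cannot parse as an address.
    System.out.println(addr.toString());
    // getHostAddress() yields just the textual IP, e.g. "127.0.0.1".
    System.out.println(addr.getHostAddress());
  }
}
{code}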

 provide client-side metrics for remote reads, local reads, and short-circuit 
 reads
 --

 Key: HDFS-4698
 URL: https://issues.apache.org/jira/browse/HDFS-4698
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-4698.001.patch, HDFS-4698.002.patch, 
 HDFS-4698.003.patch


 We should provide metrics to let clients know how many bytes of data they 
 have read remotely, versus locally or via short-circuit local reads.  This 
 will allow clients to know how well they're doing at bringing the computation 
 to the data, which will be useful in evaluating placement policies and 
 cluster configurations.
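
 A hedged sketch of the kind of per-client counters being proposed (the field 
 and method names are illustrative, not the actual HDFS-4698 API):
 {code}
 class ReadStatistics {
   long totalBytesRead;              // everything the client read
   long totalLocalBytesRead;         // bytes served by a DataNode on this host
   long totalShortCircuitBytesRead;  // bytes read directly from block files

   long remoteBytesRead() {
     // Whatever was not local had to cross the network.
     return totalBytesRead - totalLocalBytesRead;
   }
 }
 {code}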

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4300) TransferFsImage.downloadEditsToStorage should use a tmp file for destination

2013-04-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4300:
--

Attachment: hdfs-4300-2.patch

Thanks for the review, Colin. This newest patch uses a temp filename based on 
{{getCurrentTimeMillis}}, which is nice for debugging and, I think, unique 
enough. We additionally do temp file cleanup on startup in 
{{SecondaryNameNode#initialize}}.
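
A rough sketch of the download-to-temp-then-rename pattern being described 
(the file names and the downloadTo() helper are illustrative, not the actual 
HDFS code):
{code}
import java.io.File;
import java.io.IOException;

public class TmpDownloadSketch {
  static void download(File storageDir, String finalName) throws IOException {
    File finalFile = new File(storageDir, finalName);
    // A timestamp suffix keeps retried attempts distinct and makes stray
    // temp files easy to spot and clean up on startup after a crash.
    File tmpFile = new File(storageDir,
        finalName + ".tmp." + System.currentTimeMillis());
    downloadTo(tmpFile); // may fail partway; only the .tmp file is left behind
    if (!tmpFile.renameTo(finalFile)) {
      throw new IOException("rename failed: " + tmpFile + " -> " + finalFile);
    }
  }

  static void downloadTo(File dst) throws IOException {
    // placeholder for the actual byte transfer
  }
}
{code}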

 TransferFsImage.downloadEditsToStorage should use a tmp file for destination
 

 Key: HDFS-4300
 URL: https://issues.apache.org/jira/browse/HDFS-4300
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Critical
 Attachments: hdfs-4300-1.patch, hdfs-4300-2.patch


 Currently, in TransferFsImage.downloadEditsToStorage, we download the edits 
 file directly to its finalized path. So, if the transfer fails in the middle, 
 a half-written file is left and cannot be distinguished from a correct file. 
 So, future checkpoints by the 2NN will fail, since the file is truncated in 
 the middle -- but it won't ever download a good copy because it thinks it 
 already has the proper file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4776) Backport SecondaryNameNode web ui to branch-1

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HDFS-4776:


 Summary: Backport SecondaryNameNode web ui to branch-1
 Key: HDFS-4776
 URL: https://issues.apache.org/jira/browse/HDFS-4776
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor


The related JIRAs are
- HADOOP-3741: SecondaryNameNode has http server on dfs.secondary.http.address 
but without any contents 
- HDFS-1728: SecondaryNameNode.checkpointSize is in byte but not MB.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4776) Backport SecondaryNameNode web ui to branch-1

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4776:
-

Attachment: h4776_20130429.patch

h4776_20130429.patch

 Backport SecondaryNameNode web ui to branch-1
 -

 Key: HDFS-4776
 URL: https://issues.apache.org/jira/browse/HDFS-4776
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Attachments: h4776_20130429.patch


 The related JIRAs are
 - HADOOP-3741: SecondaryNameNode has http server on 
 dfs.secondary.http.address but without any contents 
 - HDFS-1728: SecondaryNameNode.checkpointSize is in byte but not MB.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4776) Backport SecondaryNameNode web ui to branch-1

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4776:
-

Attachment: (was: h4776_20130429.patch)

 Backport SecondaryNameNode web ui to branch-1
 -

 Key: HDFS-4776
 URL: https://issues.apache.org/jira/browse/HDFS-4776
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Attachments: h4776_20130429.patch


 The related JIRAs are
 - HADOOP-3741: SecondaryNameNode has http server on 
 dfs.secondary.http.address but without any contents 
 - HDFS-1728: SecondaryNameNode.checkpointSize is in byte but not MB.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4776) Backport SecondaryNameNode web ui to branch-1

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4776:
-

Attachment: h4776_20130429.patch

 Backport SecondaryNameNode web ui to branch-1
 -

 Key: HDFS-4776
 URL: https://issues.apache.org/jira/browse/HDFS-4776
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Attachments: h4776_20130429.patch


 The related JIRAs are
 - HADOOP-3741: SecondaryNameNode has http server on 
 dfs.secondary.http.address but without any contents 
 - HDFS-1728: SecondaryNameNode.checkpointSize is in byte but not MB.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3934) duplicative dfs_hosts entries handled wrong

2013-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644835#comment-13644835
 ] 

Hadoop QA commented on HDFS-3934:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12581007/HDFS-3934.012.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4336//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4336//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4336//console

This message is automatically generated.

 duplicative dfs_hosts entries handled wrong
 ---

 Key: HDFS-3934
 URL: https://issues.apache.org/jira/browse/HDFS-3934
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Andy Isaacson
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-3934.001.patch, HDFS-3934.002.patch, 
 HDFS-3934.003.patch, HDFS-3934.004.patch, HDFS-3934.005.patch, 
 HDFS-3934.006.patch, HDFS-3934.007.patch, HDFS-3934.008.patch, 
 HDFS-3934.010.patch, HDFS-3934.011.patch, HDFS-3934.012.patch


 A dead DN listed in dfs_hosts_allow.txt by IP and in dfs_hosts_exclude.txt by 
 hostname ends up being displayed twice in {{dfsnodelist.jsp?whatNodes=DEAD}} 
 after the NN restarts, because {{getDatanodeListForReport}} does not handle 
 such a pseudo-duplicate correctly:
 # the "Remove any nodes we know about from the map" loop no longer has the 
 knowledge to remove the spurious entries
 # the "The remaining nodes are ones that are referenced by the hosts files" 
 loop does not do hostname lookups, so it does not know that the IP and 
 hostname refer to the same host.
 Relatedly, such an IP-based dfs_hosts entry results in a cosmetic problem in 
 the JSP output: the *Node* column shows ":50010" as the nodename, with HTML 
 markup {{<a 
 href="http://:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F&nnaddr=172.29.97.196:8020"
  title="172.29.97.216:50010">:50010</a>}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4687) TestDelegationTokenForProxyUser#testWebHdfsDoAs is flaky with JDK7

2013-04-29 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644862#comment-13644862
 ] 

Aaron T. Myers commented on HDFS-4687:
--

+1, the patch looks good to me. I ran this test in isolation with and without 
the patch and confirmed that it works as expected.

I'm going to commit this momentarily.

 TestDelegationTokenForProxyUser#testWebHdfsDoAs is flaky with JDK7
 --

 Key: HDFS-4687
 URL: https://issues.apache.org/jira/browse/HDFS-4687
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4687-1.patch


 Fails fairly often with JDK7.
 {noformat}
 Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
 Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.623 sec  
 FAILURE!
 testWebHdfsDoAs(org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser)
   Time elapsed: 4765 sec   FAILURE!
 java.lang.AssertionError: expected:200 but was:401
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.failNotEquals(Assert.java:645)
   at org.junit.Assert.assertEquals(Assert.java:126)
   at org.junit.Assert.assertEquals(Assert.java:470)
   at org.junit.Assert.assertEquals(Assert.java:454)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsTestUtil.connectAndGetJson(WebHdfsTestUtil.java:78)
   at 
 org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser.testWebHdfsDoAs(TestDelegationTokenForProxyUser.java:174)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
   at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4687) TestDelegationTokenForProxyUser#testWebHdfsDoAs is flaky with JDK7

2013-04-29 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-4687:
-

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this patch to trunk.

Thanks a lot for the contribution, Andrew, and thanks also to Arpit for the 
review.

 TestDelegationTokenForProxyUser#testWebHdfsDoAs is flaky with JDK7
 --

 Key: HDFS-4687
 URL: https://issues.apache.org/jira/browse/HDFS-4687
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Fix For: 3.0.0

 Attachments: hdfs-4687-1.patch


 Fails fairly often with JDK7.
 {noformat}
 Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
 Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.623 sec  
 FAILURE!
 testWebHdfsDoAs(org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser)
   Time elapsed: 4765 sec   FAILURE!
 java.lang.AssertionError: expected:200 but was:401
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.failNotEquals(Assert.java:645)
   at org.junit.Assert.assertEquals(Assert.java:126)
   at org.junit.Assert.assertEquals(Assert.java:470)
   at org.junit.Assert.assertEquals(Assert.java:454)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsTestUtil.connectAndGetJson(WebHdfsTestUtil.java:78)
   at 
 org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser.testWebHdfsDoAs(TestDelegationTokenForProxyUser.java:174)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
   at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4687) TestDelegationTokenForProxyUser#testWebHdfsDoAs is flaky with JDK7

2013-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644870#comment-13644870
 ] 

Hudson commented on HDFS-4687:
--

Integrated in Hadoop-trunk-Commit #3690 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3690/])
HDFS-4687. TestDelegationTokenForProxyUser#testWebHdfsDoAs is flaky with 
JDK7. Contributed by Andrew Wang. (Revision 1477344)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1477344
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationTokenForProxyUser.java


 TestDelegationTokenForProxyUser#testWebHdfsDoAs is flaky with JDK7
 --

 Key: HDFS-4687
 URL: https://issues.apache.org/jira/browse/HDFS-4687
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Fix For: 3.0.0

 Attachments: hdfs-4687-1.patch


 Fails fairly often with JDK7.
 {noformat}
 Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
 Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.623 sec  
 FAILURE!
 testWebHdfsDoAs(org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser)
   Time elapsed: 4765 sec   FAILURE!
 java.lang.AssertionError: expected:200 but was:401
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.failNotEquals(Assert.java:645)
   at org.junit.Assert.assertEquals(Assert.java:126)
   at org.junit.Assert.assertEquals(Assert.java:470)
   at org.junit.Assert.assertEquals(Assert.java:454)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsTestUtil.connectAndGetJson(WebHdfsTestUtil.java:78)
   at 
 org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser.testWebHdfsDoAs(TestDelegationTokenForProxyUser.java:174)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
   at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (HDFS-4698) provide client-side metrics for remote reads, local reads, and short-circuit reads

2013-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644876#comment-13644876
 ] 

Hadoop QA commented on HDFS-4698:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12581027/HDFS-4698.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4337//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4337//console

This message is automatically generated.

 provide client-side metrics for remote reads, local reads, and short-circuit 
 reads
 --

 Key: HDFS-4698
 URL: https://issues.apache.org/jira/browse/HDFS-4698
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.0.4-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-4698.001.patch, HDFS-4698.002.patch, 
 HDFS-4698.003.patch


 We should provide metrics to let clients know how many bytes of data they 
 have read remotely, versus locally or via short-circuit local reads.  This 
 will allow clients to know how well they're doing at bringing the computation 
 to the data, which will be useful in evaluating placement policies and 
 cluster configurations.
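 As a rough illustration, a minimal sketch of the kind of per-stream counters 
 this implies; the class, field, and method names here are assumptions, not 
 the patch's actual API:
{code}
// Hypothetical accumulator for client-side read metrics; HDFS-4698's real
// class and member names may differ.
public class ReadStatisticsSketch {
  private long totalBytesRead;              // all bytes read by this stream
  private long totalLocalBytesRead;         // bytes served by a local DataNode
  private long totalShortCircuitBytesRead;  // bytes read bypassing the DataNode

  public synchronized void addRemoteBytes(long n) {
    totalBytesRead += n;
  }

  public synchronized void addLocalBytes(long n, boolean shortCircuit) {
    totalBytesRead += n;
    totalLocalBytesRead += n;
    if (shortCircuit) {
      totalShortCircuitBytesRead += n;
    }
  }

  // remote = total - local, which is the number a client would watch to
  // judge how well it is bringing the computation to the data
  public synchronized long getRemoteBytesRead() {
    return totalBytesRead - totalLocalBytesRead;
  }
}
{code}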

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644880#comment-13644880
 ] 

Suresh Srinivas commented on HDFS-4489:
---

Here is NNBench for delete operations (run with 100 threads running 
simultaneously):
||Operations||Elapsed||OpsPerSec||AvgTime||
|10|19243|5196.694902|19|
|10|18598|5376.92225|18|
|10|17819|5611.987205|17|
|10|17953|5570.099705|17|
|10|18077|5531.891354|18|
|10|17948|5571.651437|17|
|10|18080|5530.973451|18|
|10|18032|5545.696539|18|
|10|18431|5425.641582|18|
|10|17735|5638.567804|17|
|10|1819.6|5500.012623|17.7|

||Operations||Elapsed||OpsPerSec||AvgTime||
|10|18029|5546.619336|17|
|10|18527|5397.527932|18|
|10|18164|5505.395287|18|
|10|18486|5409.49908|18|
|10|18053|5539.24|18|
|10|18313|5460.601758|18|
|10|18299|5464.779496|18|
|10|17878|5593.466831|17|
|10|18178|5501.155243|18|
|10|18084|5529.750055|18|
|10|1820.1|5494.804057|17.8|



 Use InodeID as an identifier of a file in HDFS protocols and APIs
 

 Key: HDFS-4489
 URL: https://issues.apache.org/jira/browse/HDFS-4489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.0.5-beta

 Attachments: 4434.optimized.patch


 The benefits of using InodeID to uniquely identify a file are manifold. Here 
 are a few of them:
 1. uniquely identify a file across renames; related JIRAs include HDFS-4258, 
 HDFS-4437.
 2. modification checks in tools like distcp. Since a file could have been 
 replaced or renamed, the file name and size combination is not reliable, 
 but the combination of file id and size is unique.
 3. id based protocol support (e.g., NFS)
 4. to make the pluggable block placement policy use fileid instead of 
 filename (HDFS-385).
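 For point 2, a minimal sketch of comparing files by (file id, length) rather 
 than (name, length); the class below is purely illustrative:
{code}
// Illustrative only: a file replaced or renamed under the same path gets a
// new inode id, so equality on (fileId, length) does not mistake it for the
// original the way equality on (path, length) can.
public final class FileIdentity {
  private final long fileId;
  private final long length;

  public FileIdentity(long fileId, long length) {
    this.fileId = fileId;
    this.length = length;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof FileIdentity)) {
      return false;
    }
    FileIdentity that = (FileIdentity) o;
    return fileId == that.fileId && length == that.length;
  }

  @Override
  public int hashCode() {
    return (int) (31 * fileId + length);
  }
}
{code}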

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644880#comment-13644880
 ] 

Suresh Srinivas edited comment on HDFS-4489 at 4/29/13 9:25 PM:


Here is NNBench for delete operations (run with 100 threads running 
simultaneously):
||Operations||Elapsed||OpsPerSec||AvgTime||
|10|19243|5196.694902|19|
|10|18598|5376.92225|18|
|10|17819|5611.987205|17|
|10|17953|5570.099705|17|
|10|18077|5531.891354|18|
|10|17948|5571.651437|17|
|10|18080|5530.973451|18|
|10|18032|5545.696539|18|
|10|18431|5425.641582|18|
|10|17735|5638.567804|17|
|10|1819|5500.01262|17.7|

||Operations||Elapsed||OpsPerSec||AvgTime||
|10|18029|5546.619336|17|
|10|18527|5397.527932|18|
|10|18164|5505.395287|18|
|10|18486|5409.49908|18|
|10|18053|5539.24|18|
|10|18313|5460.601758|18|
|10|18299|5464.779496|18|
|10|17878|5593.466831|17|
|10|18178|5501.155243|18|
|10|18084|5529.750055|18|
|10|1820.1|5494.804057|17.7|



  was (Author: sureshms):
Here is NNBench for delete operations (run with 100 threads running 
simultaneously):
||Operations||Elapsed||OpsPerSec||AvgTime||
|10|19243|5196.694902|19|
|10|18598|5376.92225|18|
|10|17819|5611.987205|17|
|10|17953|5570.099705|17|
|10|18077|5531.891354|18|
|10|17948|5571.651437|17|
|10|18080|5530.973451|18|
|10|18032|5545.696539|18|
|10|18431|5425.641582|18|
|10|17735|5638.567804|17|
|10|1819.6|5500.012623|17.7|

||Operations||Elapsed||OpsPerSec||AvgTime||
|10|18029|5546.619336|17|
|10|18527|5397.527932|18|
|10|18164|5505.395287|18|
|10|18486|5409.49908|18|
|10|18053|5539.24|18|
|10|18313|5460.601758|18|
|10|18299|5464.779496|18|
|10|17878|5593.466831|17|
|10|18178|5501.155243|18|
|10|18084|5529.750055|18|
|10|1820.1|5494.804057|17.8|


  
 Use InodeID as an identifier of a file in HDFS protocols and APIs
 

 Key: HDFS-4489
 URL: https://issues.apache.org/jira/browse/HDFS-4489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.0.5-beta

 Attachments: 4434.optimized.patch


 The benefits of using InodeID to uniquely identify a file are manifold. Here 
 are a few of them:
 1. uniquely identify a file across renames; related JIRAs include HDFS-4258, 
 HDFS-4437.
 2. modification checks in tools like distcp. Since a file could have been 
 replaced or renamed, the file name and size combination is not reliable, 
 but the combination of file id and size is unique.
 3. id based protocol support (e.g., NFS)
 4. to make the pluggable block placement policy use fileid instead of 
 filename (HDFS-385).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4305) Add a configurable limit on number of blocks per file, and min block size

2013-04-29 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644884#comment-13644884
 ] 

Aaron T. Myers commented on HDFS-4305:
--

Thanks for confirming you're OK with this, Suresh.

+1, the latest patch looks good to me. I agree that the test failure seems 
unrelated. I'm going to commit this momentarily.

 Add a configurable limit on number of blocks per file, and min block size
 -

 Key: HDFS-4305
 URL: https://issues.apache.org/jira/browse/HDFS-4305
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1.0.4, 2.0.4-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Minor
 Attachments: hdfs-4305-1.patch, hdfs-4305-2.patch, hdfs-4305-3.patch


 We recently had an issue where a user set the block size very very low and 
 managed to create a single file with hundreds of thousands of blocks. This 
 caused problems with the edit log since the OP_ADD op was so large 
 (HDFS-4304). I imagine it could also cause efficiency issues in the NN. To 
 prevent users from making such mistakes, we should:
 - introduce a configurable minimum block size, below which requests are 
 rejected
 - introduce a configurable maximum number of blocks per file, above which 
 requests to add another block are rejected (with a suitably high default as 
 to not prevent legitimate large files)
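 A minimal sketch of the two checks described above; the method names, 
 parameters, and messages are assumptions, not the committed code:
{code}
import java.io.IOException;

// Sketch of the two server-side guards: reject undersized block sizes at
// file creation, and reject adding a block once a file has too many.
class FileLimitChecksSketch {
  static void checkBlockSize(long blockSize, long minBlockSize)
      throws IOException {
    if (blockSize < minBlockSize) {
      throw new IOException("Specified block size " + blockSize
          + " is less than the configured minimum " + minBlockSize);
    }
  }

  static void checkBlocksPerFile(int blockCount, int maxBlocksPerFile)
      throws IOException {
    if (blockCount >= maxBlocksPerFile) {
      throw new IOException("File has reached the limit of "
          + maxBlocksPerFile + " blocks");
    }
  }
}
{code}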

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644898#comment-13644898
 ] 

Suresh Srinivas commented on HDFS-4489:
---

Summary of results in the tests:
# File create tests - perform additional reserved name processing, inode map 
addition and reserved name check. This is where the maximum additional work 
from the patch is being done.
#* In the micro benchmark, by just calling create file related methods, the 
time went from 19235.8 to 19789.2, roughly a 2.8% difference. This can be 
further reduced to 1.3% by turning off the inode map. The patch moves splitting 
paths into components outside the lock. Based on this, further optimizations 
are possible that improve throughput by reducing the synchronized sections. The 
end result with those optimizations can make running times much smaller than 
what they are today.
#* I would also point out that this is a micro benchmark. The % difference 
observed in it will be dwarfed by RPC times, network round trip times, etc. 
Also, the system will spend time on other operations which should not be 
affected by this patch.
# File delete tests - perform reserved name processing and only inode map 
deletion.
#* There is very little difference in the benchmark results.

 Use InodeID as an identifier of a file in HDFS protocols and APIs
 

 Key: HDFS-4489
 URL: https://issues.apache.org/jira/browse/HDFS-4489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.0.5-beta

 Attachments: 4434.optimized.patch


 The benefits of using InodeID to uniquely identify a file are manifold. Here 
 are a few of them:
 1. uniquely identify a file across renames; related JIRAs include HDFS-4258, 
 HDFS-4437.
 2. modification checks in tools like distcp. Since a file could have been 
 replaced or renamed, the file name and size combination is not reliable, 
 but the combination of file id and size is unique.
 3. id based protocol support (e.g., NFS)
 4. to make the pluggable block placement policy use fileid instead of 
 filename (HDFS-385).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4305) Add a configurable limit on number of blocks per file, and min block size

2013-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644902#comment-13644902
 ] 

Hudson commented on HDFS-4305:
--

Integrated in Hadoop-trunk-Commit #3691 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3691/])
HDFS-4305. Add a configurable limit on number of blocks per file, and min 
block size. Contributed by Andrew Wang. (Revision 1477354)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1477354
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileLimit.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hdfs-site.xml


 Add a configurable limit on number of blocks per file, and min block size
 -

 Key: HDFS-4305
 URL: https://issues.apache.org/jira/browse/HDFS-4305
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1.0.4, 2.0.4-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Minor
 Attachments: hdfs-4305-1.patch, hdfs-4305-2.patch, hdfs-4305-3.patch


 We recently had an issue where a user set the block size very very low and 
 managed to create a single file with hundreds of thousands of blocks. This 
 caused problems with the edit log since the OP_ADD op was so large 
 (HDFS-4304). I imagine it could also cause efficiency issues in the NN. To 
 prevent users from making such mistakes, we should:
 - introduce a configurable minimum block size, below which requests are 
 rejected
 - introduce a configurable maximum number of blocks per file, above which 
 requests to add another block are rejected (with a suitably high default as 
 to not prevent legitimate large files)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644907#comment-13644907
 ] 

Suresh Srinivas commented on HDFS-4489:
---

Given the above tests, here are all the issues that were brought up:
# Introducing an incompatible change
#* This is not a major incompatibility. As I said earlier, creating a file or 
directory named /.reserved is not allowed. That said, this should get into 
2.0.5, given that its main goal is compatibility.
# This patch could be destabilizing
#* This patch adds an inode map and support for a path scheme which allows 
addressing files by inode. Most of the code added in this patch is to support 
the new addressing mechanism and the extensive unit tests associated with it. 
The regular code path should be largely unaffected by this, with the exception 
of adding and deleting entries in the inode map. Please bring up any concerns 
that I might have overlooked.
# Performance impact - based on the results, there is very little performance 
impact. I see two options:
#* The difference observed in microbenchmarks amounts to a much smaller 
difference in a real system, and even that is only associated with a few write 
operations such as create. Hence it is acceptable.
#* Make further optimizations to reduce the synchronized section size based on 
the mechanism added in this patch. [~nroberts], if you feel this is important, 
I will undertake the work of optimizing this. [~daryn] had also expressed 
interest in it; not sure if he has the bandwidth.

Given this, I would like to merge this into branch-2.0.5. I hope the concerns 
expressed by people have been addressed.

 Use InodeID as an identifier of a file in HDFS protocols and APIs
 

 Key: HDFS-4489
 URL: https://issues.apache.org/jira/browse/HDFS-4489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.0.5-beta

 Attachments: 4434.optimized.patch


 The benefits of using InodeID to uniquely identify a file are manifold. Here 
 are a few of them:
 1. uniquely identify a file across renames; related JIRAs include HDFS-4258, 
 HDFS-4437.
 2. modification checks in tools like distcp. Since a file could have been 
 replaced or renamed, the file name and size combination is not reliable, 
 but the combination of file id and size is unique.
 3. id based protocol support (e.g., NFS)
 4. to make the pluggable block placement policy use fileid instead of 
 filename (HDFS-385).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4305) Add a configurable limit on number of blocks per file, and min block size

2013-04-29 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-4305:
-

   Resolution: Fixed
Fix Version/s: 2.0.5-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2.

Thanks a lot for the contribution, Andrew.

 Add a configurable limit on number of blocks per file, and min block size
 -

 Key: HDFS-4305
 URL: https://issues.apache.org/jira/browse/HDFS-4305
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1.0.4, 2.0.4-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Minor
 Fix For: 2.0.5-beta

 Attachments: hdfs-4305-1.patch, hdfs-4305-2.patch, hdfs-4305-3.patch


 We recently had an issue where a user set the block size very very low and 
 managed to create a single file with hundreds of thousands of blocks. This 
 caused problems with the edit log since the OP_ADD op was so large 
 (HDFS-4304). I imagine it could also cause efficiency issues in the NN. To 
 prevent users from making such mistakes, we should:
 - introduce a configurable minimum block size, below which requests are 
 rejected
 - introduce a configurable maximum number of blocks per file, above which 
 requests to add another block are rejected (with a suitably high default as 
 to not prevent legitimate large files)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4300) TransferFsImage.downloadEditsToStorage should use a tmp file for destination

2013-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644936#comment-13644936
 ] 

Hadoop QA commented on HDFS-4300:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12581028/hdfs-4300-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4338//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4338//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4338//console

This message is automatically generated.

 TransferFsImage.downloadEditsToStorage should use a tmp file for destination
 

 Key: HDFS-4300
 URL: https://issues.apache.org/jira/browse/HDFS-4300
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Critical
 Attachments: hdfs-4300-1.patch, hdfs-4300-2.patch


 Currently, in TransferFsImage.downloadEditsToStorage, we download the edits 
 file directly to its finalized path. So, if the transfer fails in the middle, 
 a half-written file is left and cannot be distinguished from a correct file. 
 So, future checkpoints by the 2NN will fail, since the file is truncated in 
 the middle -- but it won't ever download a good copy because it thinks it 
 already has the proper file.
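 A minimal sketch of the fix this implies: download to a side file and publish 
 it with an atomic rename only on success, so a failed transfer can never 
 masquerade as a finalized edits file (the names here are illustrative, not 
 the patch's actual code):
{code}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

// Download-to-tmp pattern: the finalized path only ever sees complete files.
class DownloadToTmpSketch {
  interface Downloader {
    void fetchTo(File dst) throws IOException; // may fail mid-transfer
  }

  static void downloadEdits(File finalized, Downloader downloader)
      throws IOException {
    File tmp = new File(finalized.getParent(), finalized.getName() + ".tmp");
    downloader.fetchTo(tmp);
    Files.move(tmp.toPath(), finalized.toPath(),  // publish atomically
        StandardCopyOption.ATOMIC_MOVE);
  }
}
{code}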

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4773) Fix bugs in quota usage updating/computation

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4773:
-

Hadoop Flags: Reviewed

+1 patch looks good.

 Fix bugs in quota usage updating/computation
 

 Key: HDFS-4773
 URL: https://issues.apache.org/jira/browse/HDFS-4773
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Attachments: HDFS-4773.001.patch, HDFS-4773.002.patch, 
 HDFS-4773.003.patch


 1. FileWithSnapshot#updateQuotaAndCollectBlocks did not consider the scenario 
 where all the snapshots have been deleted from a snapshot copy of a deleted 
 file. This may lead to a divide-by-0 error.
 2. When computing the quota usage for a WithName node and its subtree, if the 
 snapshot associated with the WithName node at the time of the rename operation 
 has been deleted, we should compute the quota based on the posterior snapshot.
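 For the first item, the shape of the fix is a zero guard before the division; 
 the sketch below is hypothetical and only illustrates the pattern:
{code}
// Illustration only: once every snapshot referencing a deleted file's
// snapshot copy is itself deleted, a count used as a divisor can
// legitimately reach zero and must be guarded.
class QuotaGuardSketch {
  static long perSnapshotShare(long totalDiskspace, int snapshotCount) {
    if (snapshotCount == 0) {
      return 0; // nothing left to charge against any snapshot
    }
    return totalDiskspace / snapshotCount;
  }
}
{code}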

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4773) Fix bugs in quota usage updating/computation

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4773.
--

   Resolution: Fixed
Fix Version/s: Snapshot (HDFS-2802)

I have committed this.  Thanks, Jing!

 Fix bugs in quota usage updating/computation
 

 Key: HDFS-4773
 URL: https://issues.apache.org/jira/browse/HDFS-4773
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4773.001.patch, HDFS-4773.002.patch, 
 HDFS-4773.003.patch


 1. FileWithSnapshot#updateQuotaAndCollectBlocks did not consider the scenario 
 where all the snapshots have been deleted from a snapshot copy of a deleted 
 file. This may lead to a divide-by-0 error.
 2. When computing the quota usage for a WithName node and its subtree, if the 
 snapshot associated with the WithName node at the time of the rename operation 
 has been deleted, we should compute the quota based on the posterior snapshot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-29 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HDFS-4610.
---

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed

+1 for the patch. I committed it to trunk.

Thank you Ivan. Thanks to Arpit and Chris for the reviews.

 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.3.patch, HDFS-4610.commonfileutils.patch


 Switch to using common utils described in HADOOP-9413 that work well 
 cross-platform.
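 A minimal sketch of the substitution, assuming the HADOOP-9413 helpers mirror 
 java.io.File's permission methods:
{code}
import java.io.File;
import org.apache.hadoop.fs.FileUtil;

// Prefer the cross-platform helpers over java.io.File's own permission
// methods, which behave differently on Windows. Call sites are illustrative.
class PermissionSketch {
  static void makeReadOnly(File dir) {
    FileUtil.setWritable(dir, false);   // instead of dir.setWritable(false)
    if (FileUtil.canWrite(dir)) {       // instead of dir.canWrite()
      throw new IllegalStateException(dir + " is still writable");
    }
  }
}
{code}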

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644983#comment-13644983
 ] 

Suresh Srinivas commented on HDFS-4610:
---

Since this patch does not have a +1 from Jenkins, I am reverting it.

 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.3.patch, HDFS-4610.commonfileutils.patch


 Switch to using common utils described in HADOOP-9413 that work well 
 cross-platform.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644986#comment-13644986
 ] 

Hudson commented on HDFS-4610:
--

Integrated in Hadoop-trunk-Commit #3692 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3692/])
HDFS-4610. Use common utils FileUtil#setReadable/Writable/Executable and 
FileUtil#canRead/Write/Execute. Contributed by Ivan Mitic. (Revision 1477385)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1477385
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImagePreTransactionalStorageInspector.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNNStorageRetentionFunctional.java


 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.3.patch, HDFS-4610.commonfileutils.patch


 Switch to using common utils described in HADOOP-9413 that work well 
 cross-platform.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-29 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas reopened HDFS-4610:
---


 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.3.patch, HDFS-4610.commonfileutils.patch


 Switch to using common utils described in HADOOP-9413 that work well 
 cross-platform.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-29 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4610:


Status: Patch Available  (was: Reopened)

 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.3.patch, HDFS-4610.commonfileutils.patch


 Switch to using common utils described in HADOOP-9413 that work well 
 cross-platform.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4774) Backport HDFS-4525 'Provide an API for knowing that whether file is closed or not' to branch 1.1

2013-04-29 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644991#comment-13644991
 ] 

Ted Yu commented on HDFS-4774:
--

I got the following when running the test suite:
{code}
[junit] Test org.apache.hadoop.hdfs.TestFileAppend4 FAILED
[junit] Test org.apache.hadoop.hdfs.TestLargeBlock FAILED (timeout)
[junit] Test org.apache.hadoop.metrics2.impl.TestSinkQueue FAILED
[junit] Test org.apache.hadoop.streaming.TestUlimit FAILED
[junit] Test org.apache.hadoop.mapred.TestFairSchedulerPoolNames FAILED
{code}
They all passed when run standalone.

 Backport HDFS-4525 'Provide an API for knowing that whether file is closed or 
 not' to branch 1.1
 

 Key: HDFS-4774
 URL: https://issues.apache.org/jira/browse/HDFS-4774
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 4774.txt


 HDFS-4525 complements the lease recovery API, which allows the user to know 
 whether the recovery has completed.
 This JIRA backports the API to branch 1.1.
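 A minimal sketch of how a client might use the backported API after lease 
 recovery, assuming the branch-1 method mirrors trunk's 
 DistributedFileSystem#isFileClosed; the polling interval is arbitrary:
{code}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Recover the lease, then poll until the NameNode reports the file closed.
class WaitForCloseSketch {
  static void recoverAndWait(DistributedFileSystem dfs, Path p)
      throws Exception {
    dfs.recoverLease(p);
    while (!dfs.isFileClosed(p)) {
      Thread.sleep(1000); // not closed yet; keep polling
    }
  }
}
{code}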

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644994#comment-13644994
 ] 

Hudson commented on HDFS-4610:
--

Integrated in Hadoop-trunk-Commit #3693 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3693/])
HDFS-4610. Reverting the patch since the Jenkins build was not run. (Revision 1477396)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1477396
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImagePreTransactionalStorageInspector.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNNStorageRetentionFunctional.java


 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.3.patch, HDFS-4610.commonfileutils.patch


 Switch to using common utils described in HADOOP-9413 that work well 
 cross-platform.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4758) Disallow nested snapshottable directories

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4758:
-

Attachment: h4758_20140429.patch

h4758_20140429.patch: check the snapshottable directories stored in 
SnapshotManager
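As a rough sketch of such a check (string paths for illustration only; the 
real check works on the inodes tracked by SnapshotManager):
{code}
import java.util.Set;

// Reject a candidate snapshottable directory if it is an ancestor or a
// descendant of one that is already registered.
class NestingCheckSketch {
  static void checkNotNested(Set<String> snapshottable, String candidate) {
    for (String existing : snapshottable) {
      if (isAncestor(existing, candidate) || isAncestor(candidate, existing)) {
        throw new IllegalArgumentException("Nested snapshottable directories"
            + " are not allowed: " + existing + " vs " + candidate);
      }
    }
  }

  private static boolean isAncestor(String dir, String path) {
    return path.startsWith(dir.endsWith("/") ? dir : dir + "/");
  }
}
{code}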

 Disallow nested snapshottable directories
 -

 Key: HDFS-4758
 URL: https://issues.apache.org/jira/browse/HDFS-4758
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h4758_20140426.patch, h4758_20140429.patch


 Nested snapshottable directories are supported by the current implementation. 
 However, it seems that there are no good use cases for nested snapshottable 
 directories, so we disable them for now until someone has a valid use case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4777) File creation code in namenode is incorrectly synchronized

2013-04-29 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HDFS-4777:
-

 Summary: File creation code in namenode is incorrectly synchronized
 Key: HDFS-4777
 URL: https://issues.apache.org/jira/browse/HDFS-4777
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.0-alpha, 0.23.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Blocker


FSNamesystem#startFileInternal calls delete. The delete method releases the 
write lock, causing parts of the startFileInternal code to be executed 
unintentionally without the write lock being held.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4777) File creation code in namenode is incorrectly synchronized

2013-04-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13645032#comment-13645032
 ] 

Suresh Srinivas commented on HDFS-4777:
---

FSNamesystem calls delete:
{code}
// File exists - must be one of append or overwrite
if (overwrite) {
  delete(src, true);
} else {
{code}

and {{deleteInternal()}} releases the lock.

 File creation code in namenode is incorrectly synchronized
 --

 Key: HDFS-4777
 URL: https://issues.apache.org/jira/browse/HDFS-4777
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Blocker

 FSNamesystem#startFileInternal calls delete. The delete method releases the 
 write lock, causing parts of the startFileInternal code to be executed 
 unintentionally without the write lock being held.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4687) TestDelegationTokenForProxyUser#testWebHdfsDoAs is flaky with JDK7

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4687:
-

Component/s: (was: webhdfs)
 test
   Priority: Minor  (was: Major)

 TestDelegationTokenForProxyUser#testWebHdfsDoAs is flaky with JDK7
 --

 Key: HDFS-4687
 URL: https://issues.apache.org/jira/browse/HDFS-4687
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Fix For: 3.0.0

 Attachments: hdfs-4687-1.patch


 Fails fairly often with JDK7.
 {noformat}
 Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
 Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.623 sec  
 FAILURE!
 testWebHdfsDoAs(org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser)
   Time elapsed: 4765 sec   FAILURE!
 java.lang.AssertionError: expected:<200> but was:<401>
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.failNotEquals(Assert.java:645)
   at org.junit.Assert.assertEquals(Assert.java:126)
   at org.junit.Assert.assertEquals(Assert.java:470)
   at org.junit.Assert.assertEquals(Assert.java:454)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsTestUtil.connectAndGetJson(WebHdfsTestUtil.java:78)
   at 
 org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser.testWebHdfsDoAs(TestDelegationTokenForProxyUser.java:174)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
   at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4300) TransferFsImage.downloadEditsToStorage should use a tmp file for destination

2013-04-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4300:
--

Attachment: hdfs-4300-3.patch

Fix the findbugs warning.

 TransferFsImage.downloadEditsToStorage should use a tmp file for destination
 

 Key: HDFS-4300
 URL: https://issues.apache.org/jira/browse/HDFS-4300
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Critical
 Attachments: hdfs-4300-1.patch, hdfs-4300-2.patch, hdfs-4300-3.patch


 Currently, in TransferFsImage.downloadEditsToStorage, we download the edits 
 file directly to its finalized path. So, if the transfer fails in the middle, 
 a half-written file is left and cannot be distinguished from a correct file. 
 So, future checkpoints by the 2NN will fail, since the file is truncated in 
 the middle -- but it won't ever download a good copy because it thinks it 
 already has the proper file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4758) Disallow nested snapshottable directories

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4758:
-

Attachment: h4758_20140429b.patch

h4758_20140429b.patch: fixes test failures.

 Disallow nested snapshottable directories
 -

 Key: HDFS-4758
 URL: https://issues.apache.org/jira/browse/HDFS-4758
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h4758_20140426.patch, h4758_20140429b.patch, 
 h4758_20140429.patch


 Nested snapshottable directories are supported by the current implementation. 
 However, it seems that there are no good use cases for nested snapshottable 
 directories, so we disable them for now until someone has a valid use case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645083#comment-13645083
 ] 

Hadoop QA commented on HDFS-4610:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12580927/HDFS-4610.commonfileutils.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4339//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4339//console

This message is automatically generated.

 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.3.patch, HDFS-4610.commonfileutils.patch


 Switch to using common utils described in HADOOP-9413 that work well 
 cross-platform.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4777) File creation code in namenode is incorrectly synchronized

2013-04-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645084#comment-13645084
 ] 

Suresh Srinivas commented on HDFS-4777:
---

Digging a little bit deeper, it turns out that my understanding of the lock is 
not correct. Here is how it works:
# FSNamesystem#startFileInt - acquires writeLock - the hold count for the lock 
is 1
# FSNamesystem#deleteInternal - acquires writeLock again - since the thread 
already holds the lock, the hold count is incremented to 2
# FSNamesystem#deleteInternal - releases writeLock. The hold count is 
decremented to 1
# FSNamesystem#deleteInternal - calls logSync(). {color:red}The editlog logSync 
is now being done with the FSNamesystem lock held{color}
# FSNamesystem#startFileInt - releases writeLock. The lock hold count drops to 
zero and the lock is released.

Based on this, I am going to update the JIRA header.
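A minimal, self-contained demonstration of the hold-count behavior described 
above, using java.util.concurrent's ReentrantReadWriteLock:
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// A reentrant write lock acquired twice stays held until the outer unlock,
// so work done after the inner unlock (the logSync point above) still runs
// with the lock held.
class HoldCountDemo {
  public static void main(String[] args) {
    ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    lock.writeLock().lock();    // outer acquire: hold count 1
    lock.writeLock().lock();    // inner acquire: hold count 2
    lock.writeLock().unlock();  // inner release: hold count back to 1
    // Still held here -- this is where logSync ends up running.
    System.out.println("held: " + lock.writeLock().isHeldByCurrentThread());
    lock.writeLock().unlock();  // outer release: hold count 0, lock released
  }
}
{code}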

 File creation code in namenode is incorrectly synchronized
 --

 Key: HDFS-4777
 URL: https://issues.apache.org/jira/browse/HDFS-4777
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Blocker

 FSNamesystem#startFileInternal calls delete. The delete method releases the 
 write lock, causing parts of the startFileInternal code to be executed 
 unintentionally without the write lock being held.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4777) File creation with overwrite flag set to true results in logSync holding namesystem lock

2013-04-29 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4777:
--

Summary: File creation with overwrite flag set to true results in logSync 
holding namesystem lock  (was: File creation code in namenode is incorrectly 
synchronized)

 File creation with overwrite flag set to true results in logSync holding 
 namesystem lock
 

 Key: HDFS-4777
 URL: https://issues.apache.org/jira/browse/HDFS-4777
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Blocker

 FSNamesystem#startFileInternal calls delete. The delete method releases the 
 write lock, causing parts of the startFileInternal code to be executed 
 unintentionally without the write lock being held.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645090#comment-13645090
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4489:
--

The performance numbers look good. Since the RPC time is not counted, a small 
percentage difference is nothing. Besides, the InodeID feature is very useful. 
It also helps in implementing the Snapshot feature.

+1 on merging it to branch-2.

 Use InodeID as an identifier of a file in HDFS protocols and APIs
 

 Key: HDFS-4489
 URL: https://issues.apache.org/jira/browse/HDFS-4489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.0.5-beta

 Attachments: 4434.optimized.patch


 The benefits of using InodeID to uniquely identify a file are manifold. Here 
 are a few of them:
 1. uniquely identify a file across renames; related JIRAs include HDFS-4258, 
 HDFS-4437.
 2. modification checks in tools like distcp. Since a file could have been 
 replaced or renamed, the file name and size combination is not reliable, 
 but the combination of file id and size is unique.
 3. id based protocol support (e.g., NFS)
 4. to make the pluggable block placement policy use fileid instead of 
 filename (HDFS-385).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-29 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4610:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed the patch to trunk.

 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.3.patch, HDFS-4610.commonfileutils.patch


 Switch to using common utils described in HADOOP-9413 that work well 
 cross-platform.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4776) Backport SecondaryNameNode web ui to branch-1

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645100#comment-13645100
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4776:
--

{noformat}
 [exec] -1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] -1 tests included.  The patch doesn't appear to include any new 
or modified tests.
 [exec] Please justify why no tests are needed for 
this patch.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] -1 findbugs.  The patch appears to introduce 19 new Findbugs 
(version 1.3.9) warnings.
{noformat}
The findbugs warnings are not related to the patch.

 Backport SecondaryNameNode web ui to branch-1
 -

 Key: HDFS-4776
 URL: https://issues.apache.org/jira/browse/HDFS-4776
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Attachments: h4776_20130429.patch


 The related JIRAs are
 - HADOOP-3741: SecondaryNameNode has http server on 
 dfs.secondary.http.address but without any contents 
 - HDFS-1728: SecondaryNameNode.checkpointSize is in byte but not MB.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4760) Update inodeMap after node replacement

2013-04-29 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4760:


Attachment: HDFS-4760.004.patch

Update the patch: update the inodeMap in INodeDirectory#replaceSelf and 
INodeDirectory#replaceChild.
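
A self-contained sketch of the invariant being maintained; class and field names are illustrative, not the actual patch:
{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative only: when an inode is replaced by a specialized copy
// (e.g., INodeDirectory -> INodeDirectoryWithSnapshot), the id-keyed
// map must be repointed at the replacement, or lookups by inode id
// will keep returning the stale object.
class InodeMapSketch {
  static class INode {
    final long id;
    INode(long id) { this.id = id; }
  }

  private final Map<Long, INode> inodeMap = new HashMap<>();

  void replaceNode(INode oldNode, INode newNode) {
    assert oldNode.id == newNode.id : "a replacement keeps the inode id";
    inodeMap.put(newNode.id, newNode); // repoint the map entry
  }
}
{code}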

 Update inodeMap after node replacement
 --

 Key: HDFS-4760
 URL: https://issues.apache.org/jira/browse/HDFS-4760
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-4760.001.patch, HDFS-4760.002.patch, 
 HDFS-4760.003.patch, HDFS-4760.004.patch


 Similar to HDFS-4757, we need to update the inodeMap after node 
 replacement. Because a lot of node replacement happens in the snapshot branch 
 (e.g., INodeDirectory -> INodeDirectoryWithSnapshot, INodeDirectory -> 
 INodeDirectorySnapshottable, INodeFile -> INodeFileWithSnapshot ...), this 
 becomes a non-trivial issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645104#comment-13645104
 ] 

Hudson commented on HDFS-4610:
--

Integrated in Hadoop-trunk-Commit #3696 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3696/])
HDFS-4610. Use common utils FileUtil#setReadable/Writable/Executable and 
FileUtil#canRead/Write/Execute. Contributed by Ivan Mitic. (Revision 1477427)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1477427
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImagePreTransactionalStorageInspector.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNNStorageRetentionFunctional.java


 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.3.patch, HDFS-4610.commonfileutils.patch


 Switch to using common utils described in HADOOP-9413 that work well 
 cross-platform.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4765) Permission check of symlink deletion incorrectly throws UnresolvedLinkException

2013-04-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4765:
--

Attachment: hdfs-4765-1.patch

Patch attached. In the case of delete, we call the permission check without 
resolving a symlink if it's the last component.

Verified fail before/pass after behavior with the included unit test.
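
As a local-filesystem analogy for the expected semantics (not the HDFS patch itself): deleting a symlink operates on the link, never the target, so only write permission on the link's parent directory should matter.
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Local-FS demo: deleting a symlink removes the link itself and
// leaves the target untouched.
public class SymlinkDeleteDemo {
  public static void main(String[] args) throws IOException {
    Path target = Files.createTempFile("target", null);
    Path link = target.resolveSibling("link-" + System.nanoTime());
    Files.createSymbolicLink(link, target);
    Files.delete(link); // removes the link only
    System.out.println("target still exists: " + Files.exists(target));
  }
}
{code}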

 Permission check of symlink deletion incorrectly throws 
 UnresolvedLinkException
 ---

 Key: HDFS-4765
 URL: https://issues.apache.org/jira/browse/HDFS-4765
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4765-1.patch


 With permissions enabled, the permission check in {{FSNamesystem#delete}} 
 will incorrectly throw an UnresolvedLinkException if the path contains a 
 symlink. This leads to FileContext resolving the symlink and instead deleting 
 the link target.
 The correct check is to see if the user has write permissions on the parent 
 directory of the symlink, e.g.
 {noformat}
 - % ls -ld symtest
 drwxr-xr-x 2 root root 4096 Apr 26 14:12 symtest
 - % ls -l symtest
 total 12
 lrwxrwxrwx 1 root root 6 Apr 26 14:12 link -> target
 -rw-r--r-- 1 root root 0 Apr 26 14:11 target
 - % rm -f symtest/link
 rm: cannot remove `symtest/link': Permission denied
 - % sudo chown andrew symtest
 - % rm -f symtest/link   
 - % 
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4765) Permission check of symlink deletion incorrectly throws UnresolvedLinkException

2013-04-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4765:
--

Status: Patch Available  (was: Open)

 Permission check of symlink deletion incorrectly throws 
 UnresolvedLinkException
 ---

 Key: HDFS-4765
 URL: https://issues.apache.org/jira/browse/HDFS-4765
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.3-alpha, 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4765-1.patch


 With permissions enabled, the permission check in {{FSNamesystem#delete}} 
 will incorrectly throw an UnresolvedLinkException if the path contains a 
 symlink. This leads to FileContext resolving the symlink and instead deleting 
 the link target.
 The correct check is to see if the user has write permissions on the parent 
 directory of the symlink, e.g.
 {noformat}
 - % ls -ld symtest
 drwxr-xr-x 2 root root 4096 Apr 26 14:12 symtest
 - % ls -l symtest
 total 12
 lrwxrwxrwx 1 root root 6 Apr 26 14:12 link -> target
 -rw-r--r-- 1 root root 0 Apr 26 14:11 target
 - % rm -f symtest/link
 rm: cannot remove `symtest/link': Permission denied
 - % sudo chown andrew symtest
 - % rm -f symtest/link   
 - % 
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4760) Update inodeMap after node replacement

2013-04-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645127#comment-13645127
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4760:
--

Should INodeReference.setReferredINode(..) also update the map?

 Update inodeMap after node replacement
 --

 Key: HDFS-4760
 URL: https://issues.apache.org/jira/browse/HDFS-4760
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-4760.001.patch, HDFS-4760.002.patch, 
 HDFS-4760.003.patch, HDFS-4760.004.patch


 Similar to HDFS-4757, we need to update the inodeMap after node 
 replacement. Because a lot of node replacement happens in the snapshot branch 
 (e.g., INodeDirectory -> INodeDirectoryWithSnapshot, INodeDirectory -> 
 INodeDirectorySnapshottable, INodeFile -> INodeFileWithSnapshot ...), this 
 becomes a non-trivial issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4760) Update inodeMap after node replacement

2013-04-29 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4760:


Attachment: HDFS-4760.005.patch

Yes, setReferredINode is called by replaceSelf and replaceChild. The previous 
patch missed a scenario in replaceChild. Updated the patch to fix it.

 Update inodeMap after node replacement
 --

 Key: HDFS-4760
 URL: https://issues.apache.org/jira/browse/HDFS-4760
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-4760.001.patch, HDFS-4760.002.patch, 
 HDFS-4760.003.patch, HDFS-4760.004.patch, HDFS-4760.005.patch


 Similar to HDFS-4757, we need to update the inodeMap after node 
 replacement. Because a lot of node replacement happens in the snapshot branch 
 (e.g., INodeDirectory -> INodeDirectoryWithSnapshot, INodeDirectory -> 
 INodeDirectorySnapshottable, INodeFile -> INodeFileWithSnapshot ...), this 
 becomes a non-trivial issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4751) TestLeaseRenewer#testThreadName flakes

2013-04-29 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645142#comment-13645142
 ] 

Aaron T. Myers commented on HDFS-4751:
--

Though I think you've correctly identified the issue, I'm not sure that this 
patch will definitely address it, since it looks to me like 
{{isFilesBeingWrittenEmpty}} isn't synchronized on the DFSClient. Please let me 
know if I'm missing something.
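
A minimal sketch of the race being pointed out; the names are assumptions, not the actual DFSClient code. A check of a shared map is only a reliable signal if it takes the same lock as the writers:
{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative only: readers and writers of filesBeingWritten must
// synchronize on the same monitor, otherwise an "is it empty?" check
// can observe stale or inconsistent state.
class ClientSketch {
  private final Map<Long, Object> filesBeingWritten = new HashMap<>();

  synchronized boolean isFilesBeingWrittenEmpty() {
    return filesBeingWritten.isEmpty();
  }

  synchronized void putFileBeingWritten(long inodeId, Object out) {
    filesBeingWritten.put(inodeId, out);
  }
}
{code}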

 TestLeaseRenewer#testThreadName flakes
 --

 Key: HDFS-4751
 URL: https://issues.apache.org/jira/browse/HDFS-4751
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.0.5-beta
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: hdfs-4751-1.patch


 Seen internally and during upstream trunk builds, error like the following:
 {noformat}
 Error Message:
  Unfinished stubbing detected here: - at 
 org.apache.hadoop.hdfs.TestLeaseRenewer.testThreadName(TestLeaseRenewer.java:197)
   E.g. thenReturn() may be missing. Examples of correct stubbing: 
 when(mock.isOk()).thenReturn(true); 
 when(mock.isOk()).thenThrow(exception); 
 doThrow(exception).when(mock).someVoidMethod(); Hints:  1. missing 
 thenReturn()  2. although stubbed methods may return mocks, you cannot inline 
 mock creation (mock()) call inside a thenReturn method (see issue 53)
 Stack Trace:
 org.mockito.exceptions.misusing.UnfinishedStubbingException:
 Unfinished stubbing detected here:
 - at 
 org.apache.hadoop.hdfs.TestLeaseRenewer.testThreadName(TestLeaseRenewer.java:197)
 {noformat}
 I believe it's due to the mock being stubbed while it is concurrently accessed 
 by another thread.
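
For context, the Mockito pitfall behind that message, as a hedged sketch: stubbing is a two-step protocol, and interleaving a call to the mock from another thread between {{when(...)}} and {{thenReturn(...)}} leaves it unfinished.
{code}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;

// Illustrative only: finish all stubbing before the mock is shared
// with another thread; otherwise Mockito may report the
// UnfinishedStubbingException seen above.
public class StubbingSketch {
  @SuppressWarnings("unchecked")
  public static void main(String[] args) {
    List<String> mockList = mock(List.class);
    when(mockList.isEmpty()).thenReturn(true); // complete before sharing
    System.out.println(mockList.isEmpty());
  }
}
{code}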

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4777) File creation with overwrite flag set to true results in logSync holding namesystem lock

2013-04-29 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4777:
--

Attachment: HDFS-4777.patch

Changes in the patch:
# I am making a change I have been planning for a long time. The 
attached patch adds an assert which fails when edit-log sync is called with 
either the FSNamesystem read or write lock held. This should help catch any 
accidental introduction of code that calls logSync in critical sections. 
# This patch also modifies TestFileCreation to create a file with the overwrite 
flag set when a file of that name already exists. This should trigger the 
logSync assert to fail.

In a subsequent patch, I will post a fix with which the test failure 
should not occur.
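
A sketch of what such a guard could look like; FSNamesystem does use a ReentrantReadWriteLock, but the exact shape of the assert in the attached patch may differ:
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only: fail fast (under -ea) if a sync is attempted
// while the current thread still holds the namesystem lock, since
// syncing the edit log under the lock stalls all other namespace
// operations.
class EditLogSketch {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();

  void logSync() {
    assert fsLock.getReadHoldCount() == 0
        && !fsLock.isWriteLockedByCurrentThread()
        : "logSync must not be called with the FSNamesystem lock held";
    // ... sync edits to persistent storage ...
  }
}
{code}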


 File creation with overwrite flag set to true results in logSync holding 
 namesystem lock
 

 Key: HDFS-4777
 URL: https://issues.apache.org/jira/browse/HDFS-4777
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Blocker
 Attachments: HDFS-4777.patch


 FSNamesystem#startFileInternal calls delete. The delete method releases the write 
 lock, so parts of the startFileInternal code are unintentionally executed without 
 the write lock being held.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4777) File creation with overwrite flag set to true results in logSync holding namesystem lock

2013-04-29 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4777:
--

Status: Patch Available  (was: Open)

 File creation with overwrite flag set to true results in logSync holding 
 namesystem lock
 

 Key: HDFS-4777
 URL: https://issues.apache.org/jira/browse/HDFS-4777
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.0-alpha, 0.23.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Blocker
 Attachments: HDFS-4777.patch


 FSNamesystem#startFileInternal calls delete. The delete method releases the write 
 lock, so parts of the startFileInternal code are unintentionally executed without 
 the write lock being held.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4300) TransferFsImage.downloadEditsToStorage should use a tmp file for destination

2013-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645151#comment-13645151
 ] 

Hadoop QA commented on HDFS-4300:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12581085/hdfs-4300-3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestFsck

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4340//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4340//console

This message is automatically generated.

 TransferFsImage.downloadEditsToStorage should use a tmp file for destination
 

 Key: HDFS-4300
 URL: https://issues.apache.org/jira/browse/HDFS-4300
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Critical
 Attachments: hdfs-4300-1.patch, hdfs-4300-2.patch, hdfs-4300-3.patch


 Currently, in TransferFsImage.downloadEditsToStorage, we download the edits 
 file directly to its finalized path. So, if the transfer fails in the middle, 
 a half-written file is left and cannot be distinguished from a correct file. 
 As a result, future checkpoints by the 2NN will fail, since the file is truncated 
 in the middle -- but the 2NN won't ever download a good copy because it thinks it 
 already has the proper file.
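
A minimal sketch of the usual remedy, with illustrative names (not the eventual patch): download to a temporary name and rename into place only on success.
{code}
import java.io.File;
import java.io.IOException;

// Illustrative only: a failed transfer leaves at most a .tmp file,
// which is never mistaken for a finalized edits file, so the next
// checkpoint retries the download cleanly.
class DownloadSketch {
  void downloadEditsToStorage(File finalized) throws IOException {
    File tmp = new File(finalized.getParentFile(), finalized.getName() + ".tmp");
    transfer(tmp); // may fail midway, leaving only the .tmp file
    if (!tmp.renameTo(finalized)) {
      throw new IOException("rename of " + tmp + " to " + finalized + " failed");
    }
  }

  void transfer(File dst) throws IOException {
    // HTTP download elided
  }
}
{code}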

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4765) Permission check of symlink deletion incorrectly throws UnresolvedLinkException

2013-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13645157#comment-13645157
 ] 

Hadoop QA commented on HDFS-4765:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12581095/hdfs-4765-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4341//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4341//console

This message is automatically generated.

 Permission check of symlink deletion incorrectly throws 
 UnresolvedLinkException
 ---

 Key: HDFS-4765
 URL: https://issues.apache.org/jira/browse/HDFS-4765
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4765-1.patch


 With permissions enabled, the permission check in {{FSNamesystem#delete}} 
 will incorrectly throw an UnresolvedLinkException if the path contains a 
 symlink. This leads to FileContext resolving the symlink and instead deleting 
 the link target.
 The correct check is to see if the user has write permissions on the parent 
 directory of the symlink, e.g.
 {noformat}
 - % ls -ld symtest
 drwxr-xr-x 2 root root 4096 Apr 26 14:12 symtest
 - % ls -l symtest
 total 12
 lrwxrwxrwx 1 root root 6 Apr 26 14:12 link -> target
 -rw-r--r-- 1 root root 0 Apr 26 14:11 target
 - % rm -f symtest/link
 rm: cannot remove `symtest/link': Permission denied
 - % sudo chown andrew symtest
 - % rm -f symtest/link   
 - % 
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4777) File creation with overwrite flag set to true results in logSync holding namesystem lock

2013-04-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645220#comment-13645220
 ] 

Hadoop QA commented on HDFS-4777:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12581106/HDFS-4777.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestParallelImageWrite
  org.apache.hadoop.hdfs.TestSafeMode
  org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
  org.apache.hadoop.hdfs.TestParallelUnixDomainRead
  org.apache.hadoop.hdfs.TestLease
  org.apache.hadoop.hdfs.TestLeaseRecovery2
  org.apache.hadoop.hdfs.TestLeaseRecovery
  org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
  org.apache.hadoop.fs.TestResolveHdfsSymlink
  
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
  org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
  
org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
  
org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker
  org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
  org.apache.hadoop.hdfs.TestDFSShell
  org.apache.hadoop.hdfs.TestFileCreation
  org.apache.hadoop.hdfs.TestDFSFinalize
  org.apache.hadoop.hdfs.TestDFSStartupVersions
  
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
  org.apache.hadoop.hdfs.TestDataTransferProtocol
  org.apache.hadoop.hdfs.TestDFSClientRetries
  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
  org.apache.hadoop.hdfs.TestMiniDFSCluster
  org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
  org.apache.hadoop.hdfs.server.namenode.TestSaveNamespace
  
org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
  org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
  org.apache.hadoop.hdfs.TestFetchImage
  org.apache.hadoop.hdfs.TestDFSRollback
  org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
  org.apache.hadoop.fs.TestHDFSFileContextMainOperations
  org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
  
org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
  org.apache.hadoop.cli.TestHDFSCLI
  
org.apache.hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks
  org.apache.hadoop.hdfs.server.namenode.TestStartup
  org.apache.hadoop.hdfs.server.namenode.TestINodeFile
  org.apache.hadoop.fs.viewfs.TestViewFsHdfs
  org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
  org.apache.hadoop.hdfs.TestHDFSFileSystemContract
  org.apache.hadoop.hdfs.security.TestDelegationToken
  
org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional
  org.apache.hadoop.hdfs.TestDFSPermission
  org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
  
org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
  org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
  org.apache.hadoop.hdfs.TestParallelShortCircuitRead
  
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
  org.apache.hadoop.hdfs.web.TestWebHDFS
  org.apache.hadoop.hdfs.TestParallelRead
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  

[jira] [Updated] (HDFS-4750) Support NFSv3 interface to HDFS

2013-04-29 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4750:
-

Status: Patch Available  (was: Open)

 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf, nfs-trunk.patch


 Accessing HDFS is usually done through the HDFS client or webHDFS. The lack of 
 seamless integration with the client’s file system makes it difficult for users, 
 and impossible for some applications, to access HDFS. NFS interface support is 
 one way for HDFS to have such easy integration.
 This JIRA is to track the NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS and the NFS interface, HDFS will be easier to access and be 
 able to support more applications and use cases. 
 We will upload the design document and the initial implementation. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4750) Support NFSv3 interface to HDFS

2013-04-29 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4750:
-

Attachment: nfs-trunk.patch

Uploaded the patch. 
Before splitting it into a few JIRAs, I have temporarily put the NFS 
implementation under hdfs only, to make one patch. The test classes are not 
included.

Some subsequent JIRAs will be filed later to address security, stability and 
other issues.

To do some tests with the current code, make sure to stop the NFS service 
provided by the platform and keep rpcbind (or portmap) running, then:
1. start HDFS
2. start the NFS gateway using hadoop nfs3. The NFS gateway has both mountd and 
nfsd. It has one export: the HDFS root / exported rw to everyone.
3. mount the export on the client, using options such as -o 
soft,vers=3,proto=tcp,nolock. Make sure the users on the client and server hosts 
are in sync, since the NFS gateway uses AUTH_SYS authentication. 


 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf, nfs-trunk.patch


 Accessing HDFS is usually done through the HDFS client or webHDFS. The lack of 
 seamless integration with the client’s file system makes it difficult for users, 
 and impossible for some applications, to access HDFS. NFS interface support is 
 one way for HDFS to have such easy integration.
 This JIRA is to track the NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS and the NFS interface, HDFS will be easier to access and be 
 able to support more applications and use cases. 
 We will upload the design document and the initial implementation. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2013-04-29 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645247#comment-13645247
 ] 

Brandon Li commented on HDFS-4750:
--

@Andrew {quote}Would be happy to help with that when you think the code is 
ready. {quote}
Thanks! I have only run the cthon04 basic tests (no symlinks) with the uploaded 
patch on CentOS. Please feel free to give it a try.

 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf, nfs-trunk.patch


 Accessing HDFS is usually done through the HDFS client or webHDFS. The lack of 
 seamless integration with the client’s file system makes it difficult for users, 
 and impossible for some applications, to access HDFS. NFS interface support is 
 one way for HDFS to have such easy integration.
 This JIRA is to track the NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS and the NFS interface, HDFS will be easier to access and be 
 able to support more applications and use cases. 
 We will upload the design document and the initial implementation. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4305) Add a configurable limit on number of blocks per file, and min block size

2013-04-29 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645250#comment-13645250
 ] 

Zhijie Shen commented on HDFS-4305:
---

The patch seems to have broken M/R tests (see MAPREDUCE-5156 and 
MAPREDUCE-5157). The problem is related to the value of 
dfs.namenode.fs-limits.min-block-size.

https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3557//testReport/
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3558//testReport/
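
Presumably the affected tests need to opt in to tiny blocks explicitly; a sketch (the key string is taken from the comment above, and the chosen value is illustrative):
{code}
import org.apache.hadoop.conf.Configuration;

public class MinBlockSizeSketch {
  public static void main(String[] args) {
    // Tests that deliberately use very small block sizes must lower
    // the new limit, or block allocation is rejected by the NameNode.
    Configuration conf = new Configuration();
    conf.setLong("dfs.namenode.fs-limits.min-block-size", 1);
    System.out.println(conf.getLong("dfs.namenode.fs-limits.min-block-size", -1));
  }
}
{code}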

 Add a configurable limit on number of blocks per file, and min block size
 -

 Key: HDFS-4305
 URL: https://issues.apache.org/jira/browse/HDFS-4305
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1.0.4, 2.0.4-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Minor
 Fix For: 2.0.5-beta

 Attachments: hdfs-4305-1.patch, hdfs-4305-2.patch, hdfs-4305-3.patch


 We recently had an issue where a user set the block size very very low and 
 managed to create a single file with hundreds of thousands of blocks. This 
 caused problems with the edit log since the OP_ADD op was so large 
 (HDFS-4304). I imagine it could also cause efficiency issues in the NN. To 
 prevent users from making such mistakes, we should:
 - introduce a configurable minimum block size, below which requests are 
 rejected
 - introduce a configurable maximum number of blocks per file, above which 
 requests to add another block are rejected (with a suitably high default as 
 to not prevent legitimate large files)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4305) Add a configurable limit on number of blocks per file, and min block size

2013-04-29 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645264#comment-13645264
 ] 

Aaron T. Myers commented on HDFS-4305:
--

Thanks a lot for the report, Zhijie. I've filed an MR JIRA to address this 
here: MAPREDUCE-5193. I'll upload a patch shortly.

 Add a configurable limit on number of blocks per file, and min block size
 -

 Key: HDFS-4305
 URL: https://issues.apache.org/jira/browse/HDFS-4305
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1.0.4, 2.0.4-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Minor
 Fix For: 2.0.5-beta

 Attachments: hdfs-4305-1.patch, hdfs-4305-2.patch, hdfs-4305-3.patch


 We recently had an issue where a user set the block size very very low and 
 managed to create a single file with hundreds of thousands of blocks. This 
 caused problems with the edit log since the OP_ADD op was so large 
 (HDFS-4304). I imagine it could also cause efficiency issues in the NN. To 
 prevent users from making such mistakes, we should:
 - introduce a configurable minimum block size, below which requests are 
 rejected
 - introduce a configurable maximum number of blocks per file, above which 
 requests to add another block are rejected (with a suitably high default as 
 to not prevent legitimate large files)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4305) Add a configurable limit on number of blocks per file, and min block size

2013-04-29 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645267#comment-13645267
 ] 

Zhijie Shen commented on HDFS-4305:
---

No problem. Thanks for your quick response:)

 Add a configurable limit on number of blocks per file, and min block size
 -

 Key: HDFS-4305
 URL: https://issues.apache.org/jira/browse/HDFS-4305
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1.0.4, 2.0.4-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Minor
 Fix For: 2.0.5-beta

 Attachments: hdfs-4305-1.patch, hdfs-4305-2.patch, hdfs-4305-3.patch


 We recently had an issue where a user set the block size very very low and 
 managed to create a single file with hundreds of thousands of blocks. This 
 caused problems with the edit log since the OP_ADD op was so large 
 (HDFS-4304). I imagine it could also cause efficiency issues in the NN. To 
 prevent users from making such mistakes, we should:
 - introduce a configurable minimum block size, below which requests are 
 rejected
 - introduce a configurable maximum number of blocks per file, above which 
 requests to add another block are rejected (with a suitably high default as 
 to not prevent legitimate large files)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2013-04-29 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645272#comment-13645272
 ] 

Brandon Li commented on HDFS-4750:
--

@Allen{quote}Any clients besides Linux and Mac OS X? (FWIW: OS X's NFS client 
has always been a bit flaky...) Have we thought about YANFS support?{quote}
Weeks ago, we did some manual tests with the Windows NFSv3 client, before we 
changed the RPC authentication support from AUTH_NULL to AUTH_SYS. We haven't 
tried it again since the change. Mapping Windows users to Unix users may be 
needed to test it again.

We looked at a few other NFS implementations. Eventually we decided to 
implement our own. The major reason is that the NFS gateway has to work around 
a few HDFS limitations and is also tightly coupled with HDFS protocols. 

 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf, nfs-trunk.patch


 Accessing HDFS is usually done through the HDFS client or webHDFS. The lack of 
 seamless integration with the client’s file system makes it difficult for users, 
 and impossible for some applications, to access HDFS. NFS interface support is 
 one way for HDFS to have such easy integration.
 This JIRA is to track the NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS and the NFS interface, HDFS will be easier to access and be 
 able to support more applications and use cases. 
 We will upload the design document and the initial implementation. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2013-04-29 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645287#comment-13645287
 ] 

Brandon Li commented on HDFS-4750:
--

@Todd{quote}What is the purpose of putting it in Hadoop proper rather than 
proposing it as a separate project (eg in the incubator)? {quote}
What we were thinking was that, as mentioned above, the NFS gateway is tightly 
coupled with HDFS protocols, and the current code base is still small. Also, 
some of the code is general enough (e.g., the oncrpc implementation) that it 
could be used by other projects. 

 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf, nfs-trunk.patch


 Accessing HDFS is usually done through the HDFS client or webHDFS. The lack of 
 seamless integration with the client’s file system makes it difficult for users, 
 and impossible for some applications, to access HDFS. NFS interface support is 
 one way for HDFS to have such easy integration.
 This JIRA is to track the NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS and the NFS interface, HDFS will be easier to access and be 
 able to support more applications and use cases. 
 We will upload the design document and the initial implementation. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2013-04-29 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645289#comment-13645289
 ] 

Brandon Li commented on HDFS-4750:
--

@Hari{quote}An alternative to consider to support NFS writes is to require 
clients do NFS mounts with directio enabled. Directio will bypass client cache 
and might alleviate some of the funky behavior.{quote}
Yes, directio could help reduce kernel-reordered writes. Solaris supports it 
via the forcedirectio mount option. Linux does not seem to have a corresponding 
mount option. 

 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf, nfs-trunk.patch


 Accessing HDFS is usually done through the HDFS client or webHDFS. The lack of 
 seamless integration with the client’s file system makes it difficult for users, 
 and impossible for some applications, to access HDFS. NFS interface support is 
 one way for HDFS to have such easy integration.
 This JIRA is to track the NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS and the NFS interface, HDFS will be easier to access and be 
 able to support more applications and use cases. 
 We will upload the design document and the initial implementation. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4305) Add a configurable limit on number of blocks per file, and min block size

2013-04-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645296#comment-13645296
 ] 

Suresh Srinivas commented on HDFS-4305:
---

[~atm] Please mark this as an incompatible change. Please also update the 
release note to describe the nature of the incompatibility and how to get 
around it.

 Add a configurable limit on number of blocks per file, and min block size
 -

 Key: HDFS-4305
 URL: https://issues.apache.org/jira/browse/HDFS-4305
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1.0.4, 2.0.4-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Minor
 Fix For: 2.0.5-beta

 Attachments: hdfs-4305-1.patch, hdfs-4305-2.patch, hdfs-4305-3.patch


 We recently had an issue where a user set the block size very very low and 
 managed to create a single file with hundreds of thousands of blocks. This 
 caused problems with the edit log since the OP_ADD op was so large 
 (HDFS-4304). I imagine it could also cause efficiency issues in the NN. To 
 prevent users from making such mistakes, we should:
 - introduce a configurable minimum block size, below which requests are 
 rejected
 - introduce a configurable maximum number of blocks per file, above which 
 requests to add another block are rejected (with a suitably high default as 
 to not prevent legitimate large files)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4305) Add a configurable limit on number of blocks per file, and min block size

2013-04-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645297#comment-13645297
 ] 

Suresh Srinivas commented on HDFS-4305:
---

One more comment: this should be in the incompatible section in CHANGES.txt.

 Add a configurable limit on number of blocks per file, and min block size
 -

 Key: HDFS-4305
 URL: https://issues.apache.org/jira/browse/HDFS-4305
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1.0.4, 2.0.4-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Minor
 Fix For: 2.0.5-beta

 Attachments: hdfs-4305-1.patch, hdfs-4305-2.patch, hdfs-4305-3.patch


 We recently had an issue where a user set the block size very very low and 
 managed to create a single file with hundreds of thousands of blocks. This 
 caused problems with the edit log since the OP_ADD op was so large 
 (HDFS-4304). I imagine it could also cause efficiency issues in the NN. To 
 prevent users from making such mistakes, we should:
 - introduce a configurable minimum block size, below which requests are 
 rejected
 - introduce a configurable maximum number of blocks per file, above which 
 requests to add another block are rejected (with a suitably high default as 
 to not prevent legitimate large files)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira