[jira] Commented: (HDFS-1024) SecondaryNamenode fails to checkpoint because namenode fails with CancelledKeyException

2010-04-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853772#action_12853772
 ] 

stack commented on HDFS-1024:
-

I ran a vote on hdfs-dev as per Dhruba's suggestion: 
http://www.mail-archive.com/hdfs-...@hadoop.apache.org/msg00930.html.  The 
vote passed with +7 votes.  I just committed HDFS-1024.patch.1-0.20.txt to 
branch-0.20.

 SecondaryNamenode fails to checkpoint because namenode fails with 
 CancelledKeyException
 ---

 Key: HDFS-1024
 URL: https://issues.apache.org/jira/browse/HDFS-1024
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.1, 0.20.2, 0.20.3, 0.21.0, 0.22.0
Reporter: dhruba borthakur
Assignee: Dmytro Molkov
Priority: Blocker
 Fix For: 0.22.0

 Attachments: HDFS-1024.patch, HDFS-1024.patch.1, 
 HDFS-1024.patch.1-0.20.txt


 The secondary namenode fails to retrieve the entire fsimage from the 
 Namenode. It fetches a part of the fsimage but believes that it has fetched 
 the entire fsimage file and proceeds ahead with the checkpointing. Stack 
 traces will be attached below.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-955) FSImage.saveFSImage can lose edits

2010-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853774#action_12853774
 ] 

Hadoop QA commented on HDFS-955:


+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12440629/saveNamespace.patch
  against trunk revision 930967.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 11 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/148/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/148/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/148/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/148/console

This message is automatically generated.

 FSImage.saveFSImage can lose edits
 --

 Key: HDFS-955
 URL: https://issues.apache.org/jira/browse/HDFS-955
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.1, 0.21.0, 0.22.0
Reporter: Todd Lipcon
Assignee: Konstantin Shvachko
Priority: Blocker
 Attachments: FSStateTransition7.htm, hdfs-955-moretests.txt, 
 hdfs-955-unittest.txt, PurgeEditsBeforeImageSave.patch, 
 saveNamespace-0.20.patch, saveNamespace-0.21.patch, saveNamespace.patch, 
 saveNamespace.patch, saveNamespace.patch, saveNamespace.patch, 
 saveNamespace.txt


 This is a continuation of a discussion from HDFS-909. The FSImage.saveFSImage 
 function (implementing dfsadmin -saveNamespace) can corrupt the NN storage 
 such that all current edits are lost.




[jira] Updated: (HDFS-997) DataNode local directories should have narrow permissions

2010-04-06 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-997:
---

Attachment: H997-5.patch

Revert this change:
{noformat}
-  File data = new File(dirURI.getPath());
+  Path dir = new Path(dirURI);
{noformat}
Datanode directory URIs with the {{file}} scheme may carry an authority, 
which the existing code explicitly drops. To preserve backwards 
compatibility, this behavior must remain.
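The authority-dropping behavior can be illustrated with plain {{java.net.URI}} (a minimal sketch; the URI value is hypothetical):

```java
import java.io.File;
import java.net.URI;

public class FileUriAuthority {
    public static void main(String[] args) {
        // Hypothetical datanode dir setting: a file URI carrying an authority.
        URI dirURI = URI.create("file://somehost/data/dfs");

        // The existing code keeps only the path, dropping scheme and authority.
        File data = new File(dirURI.getPath());
        System.out.println(data.getPath());        // prints /data/dfs

        // The authority survives on the URI itself, so constructing a Path
        // from the whole URI would behave differently.
        System.out.println(dirURI.getAuthority()); // prints somehost
    }
}
```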

 DataNode local directories should have narrow permissions
 -

 Key: HDFS-997
 URL: https://issues.apache.org/jira/browse/HDFS-997
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.22.0
Reporter: Arun C Murthy
Assignee: Luke Lu
 Fix For: 0.22.0

 Attachments: H997-4.patch, H997-5.patch, hdfs-997-trunk-v1.patch, 
 hdfs-997-trunk-v2.patch, hdfs-997-trunk-v3.patch


 DataNode's local directories (blocks et al) should have narrow 700 
 permissions for security




[jira] Updated: (HDFS-997) DataNode local directories should have narrow permissions

2010-04-06 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-997:
---

Status: Patch Available  (was: Open)

 DataNode local directories should have narrow permissions
 -

 Key: HDFS-997
 URL: https://issues.apache.org/jira/browse/HDFS-997
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.22.0
Reporter: Arun C Murthy
Assignee: Luke Lu
 Fix For: 0.22.0

 Attachments: H997-4.patch, H997-5.patch, hdfs-997-trunk-v1.patch, 
 hdfs-997-trunk-v2.patch, hdfs-997-trunk-v3.patch


 DataNode's local directories (blocks et al) should have narrow 700 
 permissions for security




[jira] Commented: (HDFS-997) DataNode local directories should have narrow permissions

2010-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853789#action_12853789
 ] 

Hadoop QA commented on HDFS-997:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12440825/H997-4.patch
  against trunk revision 930967.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 9 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/301/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/301/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/301/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/301/console


 DataNode local directories should have narrow permissions
 -

 Key: HDFS-997
 URL: https://issues.apache.org/jira/browse/HDFS-997
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.22.0
Reporter: Arun C Murthy
Assignee: Luke Lu
 Fix For: 0.22.0

 Attachments: H997-4.patch, H997-5.patch, hdfs-997-trunk-v1.patch, 
 hdfs-997-trunk-v2.patch, hdfs-997-trunk-v3.patch


 DataNode's local directories (blocks et al) should have narrow 700 
 permissions for security




[jira] Commented: (HDFS-997) DataNode local directories should have narrow permissions

2010-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853848#action_12853848
 ] 

Hadoop QA commented on HDFS-997:


+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12440852/H997-5.patch
  against trunk revision 930967.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 9 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/console


 DataNode local directories should have narrow permissions
 -

 Key: HDFS-997
 URL: https://issues.apache.org/jira/browse/HDFS-997
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.22.0
Reporter: Arun C Murthy
Assignee: Luke Lu
 Fix For: 0.22.0

 Attachments: H997-4.patch, H997-5.patch, hdfs-997-trunk-v1.patch, 
 hdfs-997-trunk-v2.patch, hdfs-997-trunk-v3.patch


 DataNode's local directories (blocks et al) should have narrow 700 
 permissions for security




[jira] Commented: (HDFS-857) Incorrect type for fuse-dfs capacity can cause df to return negative values on 32-bit machines

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853853#action_12853853
 ] 

Hudson commented on HDFS-857:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Incorrect type for fuse-dfs capacity can cause df to return negative values 
 on 32-bit machines
 

 Key: HDFS-857
 URL: https://issues.apache.org/jira/browse/HDFS-857
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/fuse-dfs
Reporter: Brian Bockelman
Assignee: Brian Bockelman
Priority: Minor
 Fix For: 0.22.0

 Attachments: HDFS-857.patch


 On sufficiently large HDFS installs, the casting of hdfsGetCapacity to a long 
 may cause df to return negative values.  tOffset should be used instead.
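 The fix itself is in fuse-dfs C code, but the truncation effect can be 
 sketched in Java, where {{int}} plays the role of a 32-bit C {{long}} (the 
 capacity value is hypothetical):

```java
public class CapacityTruncation {
    public static void main(String[] args) {
        // Hypothetical cluster capacity: ~6 TB in bytes.
        long capacityBytes = 6_000_000_000_000L;

        // On a 32-bit machine a C "long" holds 32 bits; Java's int shows the
        // same effect: only the low 32 bits survive, and the sign bit flips.
        int truncated = (int) capacityBytes;
        System.out.println(truncated < 0); // prints true
    }
}
```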




[jira] Commented: (HDFS-850) Display more memory details on the web ui

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853850#action_12853850
 ] 

Hudson commented on HDFS-850:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Display more memory details on the web ui
 -

 Key: HDFS-850
 URL: https://issues.apache.org/jira/browse/HDFS-850
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.22.0
Reporter: Dmytro Molkov
Assignee: Dmytro Molkov
Priority: Minor
 Fix For: 0.22.0

 Attachments: HDFS-850.patch, HDFS-850.patch, HDFS-850.patch, 
 screenshot-1.jpg, screenshot-2.jpg


 With HDFS-94 committed, the namenode uses JMX memory beans to get 
 information about heap usage.
 These beans provide additional information such as non-heap memory usage, 
 and heap committed and initialized memory in addition to used and max.
 It will be useful to see this additional information on the NameNode web UI.
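 The beans in question are the standard {{java.lang.management}} API; a 
 minimal sketch of reading the extra fields:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryBeanDemo {
    public static void main(String[] args) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();

        // Heap usage exposes init/used/committed/max, not just used/max.
        MemoryUsage heap = bean.getHeapMemoryUsage();
        System.out.printf("heap: init=%d used=%d committed=%d max=%d%n",
            heap.getInit(), heap.getUsed(), heap.getCommitted(), heap.getMax());

        // Non-heap memory is reported the same way.
        MemoryUsage nonHeap = bean.getNonHeapMemoryUsage();
        System.out.printf("non-heap: used=%d committed=%d%n",
            nonHeap.getUsed(), nonHeap.getCommitted());
    }
}
```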




[jira] Commented: (HDFS-858) Incorrect return codes for fuse-dfs

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853852#action_12853852
 ] 

Hudson commented on HDFS-858:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Incorrect return codes for fuse-dfs
 ---

 Key: HDFS-858
 URL: https://issues.apache.org/jira/browse/HDFS-858
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/fuse-dfs
Reporter: Brian Bockelman
Assignee: Brian Bockelman
Priority: Minor
 Fix For: 0.22.0

 Attachments: HDFS-858-2.patch, HDFS-858.patch


 fuse-dfs doesn't pass proper error codes from libhdfs; places I'd like to 
 correct are hdfsOpenFile (which can result in permission denied or quota 
 violations) and hdfsWrite (which can result in quota violations).
 By returning the correct error codes, command-line utilities return much 
 better error messages, especially for quota violations, which can be a devil 
 to debug.




[jira] Commented: (HDFS-729) fsck option to list only corrupted files

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853851#action_12853851
 ] 

Hudson commented on HDFS-729:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 fsck option to list only corrupted files
 

 Key: HDFS-729
 URL: https://issues.apache.org/jira/browse/HDFS-729
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: dhruba borthakur
Assignee: Rodrigo Schmidt
 Attachments: badFiles.txt, badFiles2.txt, corruptFiles.txt, 
 HDFS-729.1.patch, HDFS-729.2.patch, HDFS-729.3.patch, HDFS-729.4.patch, 
 HDFS-729.5.patch, HDFS-729.6.patch


 An option to fsck to list only corrupted files will be very helpful for 
 frequent monitoring.




[jira] Commented: (HDFS-1016) HDFS side change for HADOOP-6569

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853855#action_12853855
 ] 

Hudson commented on HDFS-1016:
--

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 HDFS side change for HADOOP-6569
 

 Key: HDFS-1016
 URL: https://issues.apache.org/jira/browse/HDFS-1016
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.22.0

 Attachments: optimizeCat_HDFS.patch, optimizeCat_HDFS1.patch, 
 optimizeCat_HDFS2.patch


 1. The TestCLI configuration file should be modified.
 2. DistributedFileSystem#open should change the exception to 
 FileNotFoundException when the file does not exist.




[jira] Commented: (HDFS-854) Datanode should scan devices in parallel to generate block report

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853857#action_12853857
 ] 

Hudson commented on HDFS-854:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Datanode should scan devices in parallel to generate block report
 -

 Key: HDFS-854
 URL: https://issues.apache.org/jira/browse/HDFS-854
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.22.0
Reporter: dhruba borthakur
Assignee: Dmytro Molkov
 Fix For: 0.22.0

 Attachments: HDFS-854-2.patch, HDFS-854.patch, HDFS-854.patch.1


 A Datanode should scan its disk devices in parallel so that the time to 
 generate a block report is reduced. This will reduce the startup time of a 
 cluster.
 For example, a datanode with 12 disks (1 TB each) storing a total of 150K 
 blocks can take up to 20 minutes to scan these devices to generate the 
 first block report.
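 A sketch of the idea, assuming one scanning task per volume (illustrative 
 only, not the actual HDFS-854 patch; {{scanVolume}} is a placeholder for 
 the real directory walk):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelVolumeScan {

    // Scan each volume (disk) on its own thread and merge the results.
    static List<String> scanAllVolumes(List<File> volumes)
            throws InterruptedException, ExecutionException {
        ExecutorService pool =
            Executors.newFixedThreadPool(Math.max(1, volumes.size()));
        List<Future<List<String>>> futures = new ArrayList<>();
        for (File vol : volumes) {
            Callable<List<String>> task = () -> scanVolume(vol);
            futures.add(pool.submit(task));
        }
        List<String> blocks = new ArrayList<>();
        for (Future<List<String>> f : futures) {
            blocks.addAll(f.get()); // wait for each volume's scan to finish
        }
        pool.shutdown();
        return blocks;
    }

    // Placeholder for the real per-volume directory walk.
    static List<String> scanVolume(File vol) {
        List<String> names = new ArrayList<>();
        File[] files = vol.listFiles();
        if (files != null) {
            for (File f : files) {
                names.add(f.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(scanAllVolumes(new ArrayList<>())); // prints []
    }
}
```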




[jira] Commented: (HDFS-913) TestRename won't run automatically from 'run-test-hdfs-fault-inject' target

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853858#action_12853858
 ] 

Hudson commented on HDFS-913:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 TestRename won't run automatically from 'run-test-hdfs-fault-inject' target
 --

 Key: HDFS-913
 URL: https://issues.apache.org/jira/browse/HDFS-913
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Konstantin Boudnik
Assignee: Suresh Srinivas
 Fix For: 0.22.0

 Attachments: hdfs-913.patch


 Fault injection test classes are supposed to have the {{TestFi}} prefix. 
 Otherwise, the JUnit target won't pick them up as part of the batch test 
 run. It is still possible to run tests with different names via the 
 {{-Dtestcase=}} directive.




[jira] Commented: (HDFS-1015) Intermittent failure in TestSecurityTokenEditLog

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853862#action_12853862
 ] 

Hudson commented on HDFS-1015:
--

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Intermittent failure in TestSecurityTokenEditLog
 

 Key: HDFS-1015
 URL: https://issues.apache.org/jira/browse/HDFS-1015
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node, test
Affects Versions: 0.22.0
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Fix For: 0.22.0

 Attachments: HDFS-1015-y20.1.patch, HDFS-1015.1.patch, 
 HDFS-1015.2.patch


 This test fails sometimes in the hadoop-0.20.100-secondary build. It doesn't 
 fail in the trunk or hadoop-0.20.100 builds.




[jira] Commented: (HDFS-856) Hardcoded replication level for new files in fuse-dfs

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853854#action_12853854
 ] 

Hudson commented on HDFS-856:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Hardcoded replication level for new files in fuse-dfs
 -

 Key: HDFS-856
 URL: https://issues.apache.org/jira/browse/HDFS-856
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/fuse-dfs
Reporter: Brian Bockelman
Assignee: Brian Bockelman
Priority: Minor
 Fix For: 0.22.0

 Attachments: HADOOP-856.patch


 In fuse-dfs, the number of replicas is always hardcoded to 3 in the arguments 
 to hdfsOpenFile.  We should use the setting in the hadoop configuration 
 instead.
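 A sketch of the intended lookup, using {{java.util.Properties}} as a 
 stand-in for Hadoop's Configuration ({{dfs.replication}} is the real 
 setting; the helper method is hypothetical):

```java
import java.util.Properties;

public class ReplicationFromConf {

    // Resolve the replication factor from configuration instead of
    // hardcoding 3. Properties stands in for Hadoop's Configuration;
    // the real code would consult the loaded hadoop configuration.
    static short replicationFor(Properties conf) {
        return Short.parseShort(conf.getProperty("dfs.replication", "3"));
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(replicationFor(conf)); // prints 3 (the default)

        conf.setProperty("dfs.replication", "2");
        System.out.println(replicationFor(conf)); // prints 2 (configured)
    }
}
```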




[jira] Commented: (HDFS-245) Create symbolic links in HDFS

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853856#action_12853856
 ] 

Hudson commented on HDFS-245:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Create symbolic links in HDFS
 -

 Key: HDFS-245
 URL: https://issues.apache.org/jira/browse/HDFS-245
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: dhruba borthakur
Assignee: Eli Collins
 Fix For: 0.22.0

 Attachments: 4044_20081030spi.java, design-doc-v4.txt, 
 designdocv1.txt, designdocv2.txt, designdocv3.txt, 
 HADOOP-4044-strawman.patch, symlink-0.20.0.patch, symlink-25-hdfs.patch, 
 symlink-26-hdfs.patch, symlink-26-hdfs.patch, symLink1.patch, symLink1.patch, 
 symLink11.patch, symLink12.patch, symLink13.patch, symLink14.patch, 
 symLink15.txt, symLink15.txt, symlink16-common.patch, symlink16-hdfs.patch, 
 symlink16-mr.patch, symlink17-common.txt, symlink17-hdfs.txt, 
 symlink18-common.txt, symlink19-common-delta.patch, symlink19-common.txt, 
 symlink19-common.txt, symlink19-hdfs-delta.patch, symlink19-hdfs.txt, 
 symlink20-common.patch, symlink20-hdfs.patch, symlink21-common.patch, 
 symlink21-hdfs.patch, symlink22-common.patch, symlink22-hdfs.patch, 
 symlink23-common.patch, symlink23-hdfs.patch, symlink24-hdfs.patch, 
 symlink27-hdfs.patch, symlink28-hdfs.patch, symlink29-hdfs.patch, 
 symlink29-hdfs.patch, symlink30-hdfs.patch, symlink31-hdfs.patch, 
 symlink33-hdfs.patch, symlink35-hdfs.patch, symlink36-hdfs.patch, 
 symlink37-hdfs.patch, symlink38-hdfs.patch, symlink39-hdfs.patch, 
 symLink4.patch, symlink40-hdfs.patch, symlink41-hdfs.patch, symLink5.patch, 
 symLink6.patch, symLink8.patch, symLink9.patch


 HDFS should support symbolic links. A symbolic link is a special type of file 
 that contains a reference to another file or directory in the form of an 
 absolute or relative path and that affects pathname resolution. Programs 
 which read or write to files named by a symbolic link will behave as if 
 operating directly on the target file. However, archiving utilities can 
 handle symbolic links specially and manipulate them directly.




[jira] Commented: (HDFS-939) libhdfs test is broken

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853863#action_12853863
 ] 

Hudson commented on HDFS-939:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 libhdfs test is broken
 --

 Key: HDFS-939
 URL: https://issues.apache.org/jira/browse/HDFS-939
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/libhdfs
Affects Versions: 0.22.0
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Blocker
 Fix For: 0.22.0

 Attachments: hdfs-939-1.patch


 The libhdfs test currently does not run because hadoop.tmp.dir is specified 
 as a relative path, and it looks like a side effect of HDFS-873 is that 
 relative paths are made absolute, so build/test/libhdfs gets turned into 
 /test/libhdfs, which the NN cannot create. Let's make the test generate conf 
 files that use the appropriate directory (build/test/libhdfs) specified by 
 fully qualified URIs.
 Also, are relative paths in conf files supported? If not, rather than 
 failing, we should detect this and print a warning.
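 The ambiguity can be seen with {{java.nio.file}} directly (a minimal 
 sketch; the path mirrors the build/test/libhdfs directory mentioned above):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class QualifiedTmpDir {
    public static void main(String[] args) {
        // A relative hadoop.tmp.dir such as build/test/libhdfs resolves
        // against the current working directory, which is what makes
        // absolutization surprising; a fully qualified file URI is explicit.
        Path relative = Paths.get("build/test/libhdfs");
        System.out.println(relative.isAbsolute());             // prints false
        System.out.println(relative.toAbsolutePath().toUri()); // file:///...
    }
}
```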




[jira] Commented: (HDFS-1032) Extend DFSck with an option to list corrupt files using API from HDFS-729

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853861#action_12853861
 ] 

Hudson commented on HDFS-1032:
--

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Extend DFSck with an option to list corrupt files using API from HDFS-729
 -

 Key: HDFS-1032
 URL: https://issues.apache.org/jira/browse/HDFS-1032
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Reporter: Rodrigo Schmidt
Assignee: André Oriani
 Attachments: hdfs-1032_aoriani.patch, hdfs-1032_aoriani_2.patch, 
 hdfs-1032_aoriani_3.patch, hdfs-1032_aoriani_4.patch


 HDFS-729 created a new API to namenode that returns the list of corrupt files.
 We can now extend fsck (DFSck.java) to add an option (e.g. --list_corrupt) 
 that queries the namenode using the new API and lists the corrupt blocks to 
 the users.




[jira] Commented: (HDFS-1074) TestProxyUtil fails

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853860#action_12853860
 ] 

Hudson commented on HDFS-1074:
--

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 TestProxyUtil fails
 ---

 Key: HDFS-1074
 URL: https://issues.apache.org/jira/browse/HDFS-1074
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/hdfsproxy
Affects Versions: 0.22.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Srikanth Sundarrajan
 Fix For: 0.22.0

 Attachments: HDFS-1074.patch


 TestProxyUtil failed a few Hudson builds, including 
 [#289|http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/289/testReport/org.apache.hadoop.hdfsproxy/TestProxyUtil/testSendCommand/],
  
 [#287|http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/289/testReport/org.apache.hadoop.hdfsproxy/TestProxyUtil/testSendCommand/],
  etc.
 {noformat}
 junit.framework.AssertionFailedError: null
   at 
 org.apache.hadoop.hdfsproxy.TestProxyUtil.testSendCommand(TestProxyUtil.java:43)
 {noformat}




[jira] Commented: (HDFS-859) fuse-dfs utime behavior causes issues with tar

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853864#action_12853864
 ] 

Hudson commented on HDFS-859:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 fuse-dfs utime behavior causes issues with tar
 --

 Key: HDFS-859
 URL: https://issues.apache.org/jira/browse/HDFS-859
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/fuse-dfs
Reporter: Brian Bockelman
Assignee: Brian Bockelman
Priority: Minor
 Fix For: 0.22.0

 Attachments: HDFS-859-2.patch, HDFS-859.patch


 When trying to untar files onto fuse-dfs, tar will try to set the utime on 
 all the files and directories.  However, setting the utime on a directory in 
 libhdfs causes an error.
 We should silently ignore the failure of setting a utime on a directory; this 
 will allow tar to complete successfully.




[jira] Commented: (HDFS-520) Create new tests for block recovery

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853867#action_12853867
 ] 

Hudson commented on HDFS-520:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Create new tests for block recovery
 ---

 Key: HDFS-520
 URL: https://issues.apache.org/jira/browse/HDFS-520
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Konstantin Boudnik
Assignee: Hairong Kuang
 Fix For: 0.21.0, 0.22.0

 Attachments: blockRecoveryPositive.patch, 
 blockRecoveryPositive1.patch, blockRecoveryPositive2.patch, 
 blockRecoveryPositive2_0.21.patch, 
 TEST-org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.txt


 According to the test plan, a number of new features are going to be 
 implemented as part of this umbrella (HDFS-265) JIRA.
 These new features have to be tested properly. Block recovery is one piece 
 of new functionality that requires new tests to be developed.




[jira] Commented: (HDFS-1043) Benchmark overhead of server-side group resolution of users

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853878#action_12853878
 ] 

Hudson commented on HDFS-1043:
--

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Benchmark overhead of server-side group resolution of users
 ---

 Key: HDFS-1043
 URL: https://issues.apache.org/jira/browse/HDFS-1043
 Project: Hadoop HDFS
  Issue Type: Test
  Components: benchmarks
Affects Versions: 0.22.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 0.22.0

 Attachments: UGCRefresh.patch, UGCRefresh.patch


 Server-side user group resolution was introduced in HADOOP-4656. 
 The benchmark should repeatedly request the name-node for user group 
 resolution, and reset NN's user group cache periodically.




[jira] Commented: (HDFS-946) NameNode should not return full path name when listing a directory or getting the status of a file

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853868#action_12853868
 ] 

Hudson commented on HDFS-946:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 NameNode should not return full path name when listing a directory or 
 getting the status of a file
 -

 Key: HDFS-946
 URL: https://issues.apache.org/jira/browse/HDFS-946
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.22.0

 Attachments: HdfsFileStatus-yahoo20.patch, HDFSFileStatus.patch, 
 HDFSFileStatus1.patch, HdfsFileStatus3.patch, HdfsFileStatus4.patch, 
 HdfsFileStatusProxy-Yahoo20.patch


 FSDirectory#getListing(String src) has the following code:
 {code}
 int i = 0;
 for (INode cur : contents) {
   listing[i] = createFileStatus(srcs + cur.getLocalName(), cur);
   i++;
 }
 {code}
 So listing a directory will return an array of FileStatus. Each FileStatus 
 element has the full path name. This increases the return message size and 
 adds non-negligible CPU time to the operation.
 FSDirectory#getFileInfo(String) does not need to return the file name either.
 Another optimization is that in the version of FileStatus used in the wire 
 protocol, the path field does not need to be a Path; ideally it could be a 
 String or a byte array. This could avoid unnecessary creation of Path 
 objects at the NameNode, thus helping reduce the GC problem observed when a 
 large number of getFileInfo or getListing operations hit the NameNode.
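 A hedged sketch of the client side of this optimization (hypothetical names, 
 not the committed API): the server ships only each entry's local name, as 
 bytes on the wire, and the client rebuilds the full path from the directory 
 it listed.

```java
import java.nio.charset.StandardCharsets;

public class LocalNameListing {
    // Client-side reconstruction: the server sends only the local name of
    // each entry; the client prepends the parent directory it listed.
    static String fullPath(String parent, byte[] localName) {
        String name = new String(localName, StandardCharsets.UTF_8);
        if (name.isEmpty()) {
            return parent; // e.g. getFileInfo on the directory itself
        }
        return parent.endsWith("/") ? parent + name : parent + "/" + name;
    }
}
```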

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1024) SecondaryNamenode fails to checkpoint because namenode fails with CancelledKeyException

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853877#action_12853877
 ] 

Hudson commented on HDFS-1024:
--

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 SecondaryNamenode fails to checkpoint because namenode fails with 
 CancelledKeyException
 ---

 Key: HDFS-1024
 URL: https://issues.apache.org/jira/browse/HDFS-1024
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.1, 0.20.2, 0.20.3, 0.21.0, 0.22.0
Reporter: dhruba borthakur
Assignee: Dmytro Molkov
Priority: Blocker
 Fix For: 0.22.0

 Attachments: HDFS-1024.patch, HDFS-1024.patch.1, 
 HDFS-1024.patch.1-0.20.txt


 The secondary namenode fails to retrieve the entire fsimage from the 
 Namenode. It fetches a part of the fsimage but believes that it has fetched 
 the entire fsimage file and proceeds ahead with the checkpointing. Stack 
 traces will be attached below.
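 The committed patch may differ in detail, but the core idea is to validate 
 the transfer length: compare the bytes actually received against the length 
 the server advertised, and fail the checkpoint instead of proceeding with a 
 truncated image. A minimal sketch with hypothetical names:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ImageFetchCheck {
    // Read the whole stream and verify we received exactly the advertised
    // number of bytes; otherwise the fsimage was truncated in transit.
    static byte[] fetchImage(InputStream in, long advertisedLength) throws IOException {
        java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
        }
        if (out.size() != advertisedLength) {
            throw new IOException("fsimage truncated: expected "
                + advertisedLength + " bytes, got " + out.size());
        }
        return out.toByteArray();
    }
}
```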

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-991) Allow browsing the filesystem over http using delegation tokens

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853865#action_12853865
 ] 

Hudson commented on HDFS-991:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Allow browsing the filesystem over http using delegation tokens
 ---

 Key: HDFS-991
 URL: https://issues.apache.org/jira/browse/HDFS-991
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.22.0

 Attachments: h-991.patch, h-991.patch, h-991.patch, h-991.patch, 
 h-991.patch


 Assuming the user authenticates to the NameNode in the browser, allow them to 
 browse the file system by adding a delegation token to the URL when it is 
 redirected to a datanode.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-998) The servlets should quote server generated strings sent in the response

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853871#action_12853871
 ] 

Hudson commented on HDFS-998:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 The servlets should quote server generated strings sent in the response
 ---

 Key: HDFS-998
 URL: https://issues.apache.org/jira/browse/HDFS-998
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.22.0
Reporter: Devaraj Das
Assignee: Chris Douglas
 Fix For: 0.22.0

 Attachments: H998-0y20.patch, H998-1.patch, hdfs-998-trunk-v1.patch


 This is the HDFS equivalent of MAPREDUCE-1454.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-961) dfs_readdir incorrectly parses paths

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853872#action_12853872
 ] 

Hudson commented on HDFS-961:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 dfs_readdir incorrectly parses paths
 

 Key: HDFS-961
 URL: https://issues.apache.org/jira/browse/HDFS-961
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/fuse-dfs
Affects Versions: 0.20.1, 0.20.2, 0.21.0
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 0.22.0

 Attachments: hdfs-961-1.patch, hdfs-961-2.patch


 fuse-dfs dfs_readdir assumes that DistributedFileSystem#listStatus returns 
 Paths with the same scheme/authority as the dfs.name.dir used to connect. If 
 NameNode.DEFAULT_PORT port is used listStatus returns Paths that have 
 authorities without the port (see HDFS-960), which breaks the following code. 
 {code}
 // hack city: todo fix the below to something nicer and more maintainable but
 // with good performance
 // strip off the path but be careful if the path is solely '/'
 // NOTE - this API started returning filenames as full dfs uris
 const char *const str = info[i].mName + dfs->dfs_uri_len + path_len + 
 ((path_len == 1 && *path == '/') ? 0 : 1);
 {code}
 Let's make the path parsing here more robust. listStatus returns normalized 
 paths so we can find the start of the path by searching for the 3rd slash. A 
 more long term solution is to have hdfsFileInfo maintain a path object or at 
 least pointers to the relevant URI components.
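 The "search for the 3rd slash" parsing suggested above can be sketched as 
 follows (in Java for illustration only; fuse-dfs itself is C, and these 
 names are hypothetical):

```java
public class UriPathOffset {
    // Return the index where the path component starts in a normalized
    // scheme://authority/path URI, i.e. the position of the third '/',
    // or -1 if the URI has no path component.
    static int pathOffset(String uri) {
        int slashes = 0;
        for (int i = 0; i < uri.length(); i++) {
            if (uri.charAt(i) == '/' && ++slashes == 3) {
                return i;
            }
        }
        return -1;
    }
}
```

 This works whether or not the authority carries a port, which is exactly the 
 case that broke the pointer-arithmetic version.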

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-999) Secondary namenode should login using kerberos if security is configured

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853873#action_12853873
 ] 

Hudson commented on HDFS-999:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Secondary namenode should login using kerberos if security is configured
 

 Key: HDFS-999
 URL: https://issues.apache.org/jira/browse/HDFS-999
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Boris Shkolnik
Assignee: Boris Shkolnik
 Attachments: HDFS-999-BP20.patch, HDFS-999.patch


 Right now, if NameNode is configured to use Kerberos, SecondaryNameNode will 
 fail to start.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-861) fuse-dfs does not support O_RDWR

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853866#action_12853866
 ] 

Hudson commented on HDFS-861:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 fuse-dfs does not support O_RDWR
 

 Key: HDFS-861
 URL: https://issues.apache.org/jira/browse/HDFS-861
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/fuse-dfs
Reporter: Brian Bockelman
Assignee: Brian Bockelman
Priority: Minor
 Fix For: 0.22.0

 Attachments: HDFS-861.patch


 Some applications (for us, the big one is rsync) will open a file in 
 read-write mode when it really only intends to read xor write (not both).  
 fuse-dfs should defer failure until the application actually tries to write 
 to a pre-existing file or read from a newly created file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-984) Delegation Tokens should be persisted in Namenode

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853875#action_12853875
 ] 

Hudson commented on HDFS-984:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Delegation Tokens should be persisted in Namenode
 -

 Key: HDFS-984
 URL: https://issues.apache.org/jira/browse/HDFS-984
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Fix For: 0.22.0

 Attachments: HDFS-984-0_20.4.patch, HDFS-984.10.patch, 
 HDFS-984.11.patch, HDFS-984.12.patch, HDFS-984.14.patch, HDFS-984.7.patch


 The delegation tokens should be persisted in the FsImage and EditLogs so that 
 they remain valid after a namenode shutdown and restart.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1067) Create block recovery tests that handle errors

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853870#action_12853870
 ] 

Hudson commented on HDFS-1067:
--

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Create block recovery tests that handle errors
 --

 Key: HDFS-1067
 URL: https://issues.apache.org/jira/browse/HDFS-1067
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.21.0, 0.22.0

 Attachments: blockRecoveryNegativeTests.patch


 This jira is for implementing block recovery tests described in the append 
 test plan, section 7: Fault injection tests for block recovery. Of the 11 
 test cases, BlockRecoveryFI_01 and 02 have already been implemented in 
 TestInterDatanodeProtocol.java. BlockRecoveryFI_08 has been implemented in 
 HDFS-520. This jira is to implement the rest of the test cases.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-985) HDFS should issue multiple RPCs for listing a large directory

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853880#action_12853880
 ] 

Hudson commented on HDFS-985:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 HDFS should issue multiple RPCs for listing a large directory
 -

 Key: HDFS-985
 URL: https://issues.apache.org/jira/browse/HDFS-985
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.22.0

 Attachments: directoryBrowse_0.20yahoo.patch, 
 directoryBrowse_0.20yahoo_1.patch, directoryBrowse_0.20yahoo_2.patch, 
 iterativeLS_trunk.patch, iterativeLS_trunk1.patch, iterativeLS_trunk2.patch, 
 iterativeLS_trunk3.patch, iterativeLS_trunk3.patch, iterativeLS_trunk4.patch, 
 iterativeLS_yahoo.patch, iterativeLS_yahoo1.patch, testFileStatus.patch


 Currently HDFS issues one RPC from the client to the NameNode for listing a 
 directory. However, some directories are large, containing thousands or 
 millions of items. Listing such large directories in one RPC has a few 
 shortcomings:
 1. The list operation holds the global fsnamesystem lock for a long time thus 
 blocking other requests. If a large number (like thousands) of such list 
 requests hit NameNode in a short period of time, NameNode will be 
 significantly slowed down. Users end up noticing longer response time or lost 
 connections to NameNode.
 2. The response message is uncontrollably big. We observed a response as big 
 as 50 MB when listing a directory of 300 thousand items. Even with the 
 optimization introduced in HDFS-946, which may cut the response by 
 20-50%, the response size will still be on the order of 10 megabytes.
 I propose to implement a directory listing using multiple RPCs. Here is the 
 plan:
 1. Each getListing RPC has an upper limit on the number of items returned.  
 This limit could be configurable, but I am thinking to set it to be a fixed 
 number like 500.
 2. Each RPC additionally specifies a start position for this listing request. 
 I am thinking to use the last item of the previous listing RPC as an 
 indicator. Since NameNode stores all items in a directory as a sorted array, 
 NameNode uses the last item to locate the start item of this listing even if 
 the last item is deleted in between these two consecutive calls. This has the 
 advantage of avoiding duplicate entries at the client side.
 3. The return value additionally specifies if the whole directory is done 
 listing. If the client sees a false flag, it will continue to issue another 
 RPC.
 This proposal will change the semantics of large directory listing in a sense 
 that listing is no longer an atomic operation if a directory's content is 
 changing while the listing operation is in progress.
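 The three-point plan above can be sketched end to end with a small in-memory 
 stand-in for the NameNode (hypothetical names; a per-RPC limit of 3 instead 
 of the proposed ~500, and binary search over the sorted entries so the loop 
 still works if the start item was deleted between calls):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class IterativeListing {
    static final int LIMIT = 3; // per-RPC cap; the proposal suggests ~500

    // What one getListing RPC returns: a batch plus a "more remains" flag.
    static class Listing {
        final List<String> entries;
        final boolean hasMore;
        Listing(List<String> e, boolean m) { entries = e; hasMore = m; }
    }

    // "Server" side: entries are kept sorted, as the NameNode stores them.
    // binarySearch locates the resume point even if startAfter was deleted
    // between calls (the negative insertion-point return handles that case).
    static Listing getListing(String[] sorted, String startAfter) {
        int pos = Arrays.binarySearch(sorted, startAfter);
        pos = pos >= 0 ? pos + 1 : -(pos + 1);
        int end = Math.min(pos + LIMIT, sorted.length);
        return new Listing(Arrays.asList(sorted).subList(pos, end),
                           end < sorted.length);
    }

    // Client side: keep issuing RPCs, resuming after the last item seen,
    // until the server reports the directory is fully listed.
    static List<String> listAll(String[] dir) {
        List<String> all = new ArrayList<>();
        String startAfter = "";
        Listing l;
        do {
            l = getListing(dir, startAfter);
            all.addAll(l.entries);
            if (!l.entries.isEmpty()) {
                startAfter = l.entries.get(l.entries.size() - 1);
            }
        } while (l.hasMore);
        return all;
    }
}
```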

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853876#action_12853876
 ] 

Hudson commented on HDFS-994:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Fix For: 0.22.0

 Attachments: HDFS-994-0_20.1.patch, HDFS-994-2.patch, 
 HDFS-994-3.patch, HDFS-994-4.patch, HDFS-994-5.patch, HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is 
 to provide a web service to retrieve a token over HTTP.  This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-826) Allow a mechanism for an application to detect that datanode(s) have died in the write pipeline

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853869#action_12853869
 ] 

Hudson commented on HDFS-826:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 Allow a mechanism for an application to detect that datanode(s) have died in 
 the write pipeline
 

 Key: HDFS-826
 URL: https://issues.apache.org/jira/browse/HDFS-826
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Fix For: 0.22.0

 Attachments: HDFS-826-0.20-v2.patch, HDFS-826-0.20.patch, 
 Replicable4.txt, ReplicableHdfs.txt, ReplicableHdfs2.txt, ReplicableHdfs3.txt


 HDFS does not replicate the last block of a file that is currently being 
 written to by an application. Every datanode death in the write pipeline 
 decreases the reliability of the last block of the file being written. This 
 situation can be improved if the application can be notified of a datanode 
 death in the write pipeline; the application can then decide on the right 
 course of action for this event.
 In our use-case, the application can close the file on the first datanode 
 death and start writing to a newly created file. This ensures that the 
 reliability guarantee of a block is close to 3 at all times.
 One idea is to make DFSOutputStream.write() throw an exception if the number 
 of datanodes in the write pipeline falls below minimum.replication.factor as 
 set on the client (this is backward compatible).
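 The backward-compatible idea in the last paragraph amounts to a check like 
 the following (hypothetical names, not the actual DFSOutputStream code):

```java
import java.io.IOException;

public class PipelineCheck {
    // Surface a pipeline shrink to the writer: fail the write once the
    // number of live datanodes drops below the client-side minimum, so
    // the application can close the file and open a fresh one.
    static void checkPipeline(int liveDatanodes, int minReplication) throws IOException {
        if (liveDatanodes < minReplication) {
            throw new IOException("write pipeline has " + liveDatanodes
                + " datanode(s), below minimum replication " + minReplication);
        }
    }
}
```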

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-968) s/StringBuffer/StringBuilder - as necessary

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853879#action_12853879
 ] 

Hudson commented on HDFS-968:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #302 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/302/])


 s/StringBuffer/StringBuilder - as necessary
 ---

 Key: HDFS-968
 URL: https://issues.apache.org/jira/browse/HDFS-968
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Kay Kay
Assignee: Kay Kay
 Fix For: 0.22.0

 Attachments: HDFS-968.patch
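 The title describes the whole change: StringBuilder is the drop-in, 
 unsynchronized replacement for StringBuffer wherever the buffer is not 
 shared across threads. A minimal example of the pattern after substitution:

```java
public class JoinExample {
    // StringBuilder exposes the same append/toString API as StringBuffer
    // but without per-call synchronization, which is all the substitution
    // in this issue changes.
    static String join(String[] parts, char sep) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            if (sb.length() > 0) {
                sb.append(sep);
            }
            sb.append(p);
        }
        return sb.toString();
    }
}
```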




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-935) Real user in delegation token.

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853937#action_12853937
 ] 

Hudson commented on HDFS-935:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Real user in delegation token.
 --

 Key: HDFS-935
 URL: https://issues.apache.org/jira/browse/HDFS-935
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Fix For: 0.22.0

 Attachments: HDFS-935.3.patch, HDFS-935.5.patch, HDFS-935.6.patch, 
 HDFS-935.7.patch


 The delegation token should also contain the real user that got it issued on 
 behalf of an effective user.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-858) Incorrect return codes for fuse-dfs

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853936#action_12853936
 ] 

Hudson commented on HDFS-858:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Incorrect return codes for fuse-dfs
 ---

 Key: HDFS-858
 URL: https://issues.apache.org/jira/browse/HDFS-858
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/fuse-dfs
Reporter: Brian Bockelman
Assignee: Brian Bockelman
Priority: Minor
 Fix For: 0.22.0

 Attachments: HDFS-858-2.patch, HDFS-858.patch


 fuse-dfs doesn't pass proper error codes from libhdfs; places I'd like to 
 correct are hdfsFileOpen (which can result in permission denied or quota 
 violations) and hdfsWrite (which can result in quota violations).
 By returning the correct error codes, command line utilities return much 
 better error messages - especially for quota violations, which can be a devil 
 to debug.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-857) Incorrect type for fuse-dfs capacity can cause df to return negative values on 32-bit machines

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853938#action_12853938
 ] 

Hudson commented on HDFS-857:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Incorrect type for fuse-dfs capacity can cause df to return negative values 
 on 32-bit machines
 

 Key: HDFS-857
 URL: https://issues.apache.org/jira/browse/HDFS-857
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/fuse-dfs
Reporter: Brian Bockelman
Assignee: Brian Bockelman
Priority: Minor
 Fix For: 0.22.0

 Attachments: HDFS-857.patch


 On sufficiently large HDFS installs, the casting of hdfsGetCapacity to a long 
 may cause df to return negative values.  tOffset should be used instead.
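 The overflow is easy to reproduce. In Java (used here purely for 
 illustration; fuse-dfs itself is C, where tOffset is the 64-bit type to 
 use), int plays the role of the 32-bit C long:

```java
public class CapacityTruncation {
    // Casting a byte count larger than 2 GB into a 32-bit signed type
    // wraps negative, which is exactly what df displayed on 32-bit hosts.
    static long truncateTo32Bits(long capacityBytes) {
        return (int) capacityBytes;
    }
}
```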

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1016) HDFS side change for HADOOP-6569

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853939#action_12853939
 ] 

Hudson commented on HDFS-1016:
--

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 HDFS side change for HADOOP-6569
 

 Key: HDFS-1016
 URL: https://issues.apache.org/jira/browse/HDFS-1016
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.22.0

 Attachments: optimizeCat_HDFS.patch, optimizeCat_HDFS1.patch, 
 optimizeCat_HDFS2.patch


 1. TestCLI configuration file should be modified.
 2. DistributedFileSystem#open change the exception to be 
 FileNotFoundException when the file does not exist.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-913) TestRename won't run automatically from 'run-test-hdfs-fault-inject' target

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853945#action_12853945
 ] 

Hudson commented on HDFS-913:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 TestRename won't run automatically from 'run-test-hdfs-fault-inject' target
 --

 Key: HDFS-913
 URL: https://issues.apache.org/jira/browse/HDFS-913
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Konstantin Boudnik
Assignee: Suresh Srinivas
 Fix For: 0.22.0

 Attachments: hdfs-913.patch


 Fault injection test classes are supposed to have the {{TestFi}} prefix; 
 otherwise, the JUnit target won't pick them up as part of the batch test run. 
 It's still possible to run tests with different names via the {{-Dtestcase=}} 
 directive.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-245) Create symbolic links in HDFS

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853943#action_12853943
 ] 

Hudson commented on HDFS-245:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Create symbolic links in HDFS
 -

 Key: HDFS-245
 URL: https://issues.apache.org/jira/browse/HDFS-245
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: dhruba borthakur
Assignee: Eli Collins
 Fix For: 0.22.0

 Attachments: 4044_20081030spi.java, design-doc-v4.txt, 
 designdocv1.txt, designdocv2.txt, designdocv3.txt, 
 HADOOP-4044-strawman.patch, symlink-0.20.0.patch, symlink-25-hdfs.patch, 
 symlink-26-hdfs.patch, symlink-26-hdfs.patch, symLink1.patch, symLink1.patch, 
 symLink11.patch, symLink12.patch, symLink13.patch, symLink14.patch, 
 symLink15.txt, symLink15.txt, symlink16-common.patch, symlink16-hdfs.patch, 
 symlink16-mr.patch, symlink17-common.txt, symlink17-hdfs.txt, 
 symlink18-common.txt, symlink19-common-delta.patch, symlink19-common.txt, 
 symlink19-common.txt, symlink19-hdfs-delta.patch, symlink19-hdfs.txt, 
 symlink20-common.patch, symlink20-hdfs.patch, symlink21-common.patch, 
 symlink21-hdfs.patch, symlink22-common.patch, symlink22-hdfs.patch, 
 symlink23-common.patch, symlink23-hdfs.patch, symlink24-hdfs.patch, 
 symlink27-hdfs.patch, symlink28-hdfs.patch, symlink29-hdfs.patch, 
 symlink29-hdfs.patch, symlink30-hdfs.patch, symlink31-hdfs.patch, 
 symlink33-hdfs.patch, symlink35-hdfs.patch, symlink36-hdfs.patch, 
 symlink37-hdfs.patch, symlink38-hdfs.patch, symlink39-hdfs.patch, 
 symLink4.patch, symlink40-hdfs.patch, symlink41-hdfs.patch, symLink5.patch, 
 symLink6.patch, symLink8.patch, symLink9.patch


 HDFS should support symbolic links. A symbolic link is a special type of file 
 that contains a reference to another file or directory in the form of an 
 absolute or relative path and that affects pathname resolution. Programs 
 which read or write to files named by a symbolic link will behave as if 
 operating directly on the target file. However, archiving utilities can 
 handle symbolic links specially and manipulate them directly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-856) Hardcoded replication level for new files in fuse-dfs

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853940#action_12853940
 ] 

Hudson commented on HDFS-856:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Hardcoded replication level for new files in fuse-dfs
 -

 Key: HDFS-856
 URL: https://issues.apache.org/jira/browse/HDFS-856
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/fuse-dfs
Reporter: Brian Bockelman
Assignee: Brian Bockelman
Priority: Minor
 Fix For: 0.22.0

 Attachments: HADOOP-856.patch


 In fuse-dfs, the number of replicas is always hardcoded to 3 in the arguments 
 to hdfsOpenFile.  We should use the setting in the hadoop configuration 
 instead.
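 A sketch of the proposed fix (a hypothetical helper; the real patch would 
 read the loaded Hadoop configuration): look up dfs.replication, the standard 
 key, and fall back to its shipped default of 3 when unset. Passing 0 as the 
 replication argument to hdfsOpenFile is documented to have the same 
 use-the-default effect in libhdfs.

```java
import java.util.Map;

public class ReplicationFromConf {
    // dfs.replication is the standard HDFS key; 3 is its shipped default,
    // used only when the configuration does not set the key.
    static short replicationFor(Map<String, String> conf) {
        String v = conf.get("dfs.replication");
        return v == null ? 3 : Short.parseShort(v);
    }
}
```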

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-933) Add createIdentifier() implementation to DelegationTokenSecretManager

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853941#action_12853941
 ] 

Hudson commented on HDFS-933:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Add createIdentifier() implementation to DelegationTokenSecretManager
 -

 Key: HDFS-933
 URL: https://issues.apache.org/jira/browse/HDFS-933
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Kan Zhang
Assignee: Kan Zhang
 Fix For: 0.22.0

 Attachments: h6419-01.patch, h6419-07.patch, h6419-09.patch


 abstract method createIdentifier() is being added in Common (HADOOP-6419) and 
 needs to be implemented by DelegationTokenSecretManager. This allows the RPC 
 Server's authentication layer to deserialize received TokenIdentifiers.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-930) o.a.h.hdfs.server.datanode.DataXceiver - run() - Version mismatch exception - more context to help debugging

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853944#action_12853944
 ] 

Hudson commented on HDFS-930:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 o.a.h.hdfs.server.datanode.DataXceiver - run() - Version mismatch exception - 
 more context to help debugging
 

 Key: HDFS-930
 URL: https://issues.apache.org/jira/browse/HDFS-930
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.22.0
Reporter: Kay Kay
Assignee: Kay Kay
Priority: Minor
 Fix For: 0.22.0

 Attachments: HADOOP-6519.patch, HDFS-930.patch, HDFS-930.patch


 Add some context information to the IOException thrown on a version mismatch 
 to help debugging. 
 (Applicable on the 0.20.x branch.) 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1074) TestProxyUtil fails

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853947#action_12853947
 ] 

Hudson commented on HDFS-1074:
--

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])
. hdfsproxy: Fix bugs in TestProxyUtil.  Contributed by Srikanth Sundarrajan


 TestProxyUtil fails
 ---

 Key: HDFS-1074
 URL: https://issues.apache.org/jira/browse/HDFS-1074
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/hdfsproxy
Affects Versions: 0.22.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Srikanth Sundarrajan
 Fix For: 0.22.0

 Attachments: HDFS-1074.patch


 TestProxyUtil failed a few Hudson builds, including 
 [#289|http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/289/testReport/org.apache.hadoop.hdfsproxy/TestProxyUtil/testSendCommand/],
  
 [#287|http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/289/testReport/org.apache.hadoop.hdfsproxy/TestProxyUtil/testSendCommand/],
  etc.
 {noformat}
 junit.framework.AssertionFailedError: null
   at 
 org.apache.hadoop.hdfsproxy.TestProxyUtil.testSendCommand(TestProxyUtil.java:43)
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-854) Datanode should scan devices in parallel to generate block report

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853942#action_12853942
 ] 

Hudson commented on HDFS-854:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Datanode should scan devices in parallel to generate block report
 -

 Key: HDFS-854
 URL: https://issues.apache.org/jira/browse/HDFS-854
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.22.0
Reporter: dhruba borthakur
Assignee: Dmytro Molkov
 Fix For: 0.22.0

 Attachments: HDFS-854-2.patch, HDFS-854.patch, HDFS-854.patch.1


 A Datanode should scan its disk devices in parallel so that the time to 
 generate a block report is reduced. This will reduce the startup time of a 
 cluster.
 A datanode has 12 disks (each of 1 TB) to store HDFS blocks, with a total 
 of 150K blocks on these 12 disks. It takes the datanode up to 20 minutes to 
 scan these devices to generate the first block report.
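 A hedged sketch of the parallel scan (hypothetical names; scanVolume stands 
 in for the real per-disk directory walk, and the block report is simply the 
 merged result of all volume scans):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelScan {
    // Stand-in for the per-disk directory walk that enumerates block files.
    static List<String> scanVolume(List<String> volume) {
        return new ArrayList<>(volume);
    }

    // One scanning task per volume, so 12 disks are walked concurrently
    // instead of one after another; merge the per-volume results.
    static List<String> blockReport(List<List<String>> volumes) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(volumes.size());
        try {
            List<Future<List<String>>> futures = new ArrayList<>();
            for (List<String> v : volumes) {
                futures.add(pool.submit(() -> scanVolume(v)));
            }
            List<String> report = new ArrayList<>();
            for (Future<List<String>> f : futures) {
                report.addAll(f.get());
            }
            return report;
        } finally {
            pool.shutdown();
        }
    }
}
```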

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-938) Replace calls to UGI.getUserName() with UGI.getShortUserName()

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853950#action_12853950
 ] 

Hudson commented on HDFS-938:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Replace calls to UGI.getUserName() with UGI.getShortUserName()
 --

 Key: HDFS-938
 URL: https://issues.apache.org/jira/browse/HDFS-938
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client, name-node
Reporter: Jakob Homan
Assignee: Jakob Homan
 Fix For: 0.22.0

 Attachments: contrib.ivy.jackson.patch, contrib.ivy.jackson.patch-1, 
 contrib.ivy.jackson.patch-1, contrib.ivy.jackson.patch-3, 
 HDFS-938-BP20-1.patch, HDFS-938-BP20-2.patch, HDFS-938.patch


 HADOOP-6526 details why UGI.getUserName() will not work to identify users. 
 Until the proposed UGI.getLocalName() is implemented, calls to getUserName() 
 should be replaced with the short name. 
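 A rough illustration of why the short name matters: a Kerberos principal has 
 to be trimmed to its local user component before comparison against file 
 owners. The trimming rule below is a simplified assumption for illustration, 
 not UGI's exact auth_to_local implementation:

```java
// The short name is the principal truncated at the first '/' or '@';
// a plain user name passes through unchanged.
public class ShortName {
    public static String shortUserName(String principal) {
        int cut = principal.length();
        int slash = principal.indexOf('/');
        int at = principal.indexOf('@');
        if (slash >= 0) cut = Math.min(cut, slash);
        if (at >= 0) cut = Math.min(cut, at);
        return principal.substring(0, cut);
    }
}
```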




[jira] Commented: (HDFS-939) libhdfs test is broken

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853951#action_12853951
 ] 

Hudson commented on HDFS-939:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 libhdfs test is broken
 --

 Key: HDFS-939
 URL: https://issues.apache.org/jira/browse/HDFS-939
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/libhdfs
Affects Versions: 0.22.0
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Blocker
 Fix For: 0.22.0

 Attachments: hdfs-939-1.patch


 The libhdfs test currently does not run because hadoop.tmp.dir is specified 
 as a relative path, and it looks like a side-effect of HDFS-873 was that 
 relative paths get made absolute, so build/test/libhdfs gets turned into 
 /test/libhdfs, which the NN cannot create. Let's make the test generate conf 
 files that use the appropriate directory (build/test/libhdfs) specified by 
 fully qualified URIs. 
 Also, are relative paths in conf files supported? If not, then rather than 
 failing we should detect this and print a warning.




[jira] Commented: (HDFS-1032) Extend DFSck with an option to list corrupt files using API from HDFS-729

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853949#action_12853949
 ] 

Hudson commented on HDFS-1032:
--

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Extend DFSck with an option to list corrupt files using API from HDFS-729
 -

 Key: HDFS-1032
 URL: https://issues.apache.org/jira/browse/HDFS-1032
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Reporter: Rodrigo Schmidt
Assignee: André Oriani
 Attachments: hdfs-1032_aoriani.patch, hdfs-1032_aoriani_2.patch, 
 hdfs-1032_aoriani_3.patch, hdfs-1032_aoriani_4.patch


 HDFS-729 created a new API to namenode that returns the list of corrupt files.
 We can now extend fsck (DFSck.java) to add an option (e.g. --list_corrupt) 
 that queries the namenode using the new API and lists the corrupt blocks to 
 the users.




[jira] Commented: (HDFS-914) Refactor DFSOutputStream and DFSInputStream out of DFSClient

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853948#action_12853948
 ] 

Hudson commented on HDFS-914:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Refactor DFSOutputStream and DFSInputStream out of DFSClient
 

 Key: HDFS-914
 URL: https://issues.apache.org/jira/browse/HDFS-914
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hdfs-914.txt, hdfs-914.txt, hdfs-914.txt


 I'd like to propose refactoring DFSClient to extract DFSOutputStream and 
 DFSInputStream into a new org.apache.hadoop.hdfs.client package. DFSClient 
 has become unmanageably large, containing 8 inner classes and approaching 
 4 kloc. Factoring out the non-static inner classes will also make them easier 
 to test in isolation.




[jira] Commented: (HDFS-907) Add tests for getBlockLocations and totalLoad metrics.

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853955#action_12853955
 ] 

Hudson commented on HDFS-907:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Add tests for getBlockLocations and totalLoad metrics. 
 

 Key: HDFS-907
 URL: https://issues.apache.org/jira/browse/HDFS-907
 Project: Hadoop HDFS
  Issue Type: Test
  Components: name-node
Affects Versions: 0.20.2
 Environment: This jira will add more tests for metrics reported by the 
 NameNode: numGetBlockLocations and totalLoad.
Reporter: Ravi Phulari
Assignee: Ravi Phulari
Priority: Minor
 Fix For: 0.20.2, 0.21.0, 0.22.0

 Attachments: HDFS-907-0.20.patch, HDFS-907.patch, HDFS-907.v2.patch, 
 HDFS-907v3.patch, HDFS-907v3.patch, HDFS907s.patch







[jira] Commented: (HDFS-991) Allow browsing the filesystem over http using delegation tokens

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853954#action_12853954
 ] 

Hudson commented on HDFS-991:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Allow browsing the filesystem over http using delegation tokens
 ---

 Key: HDFS-991
 URL: https://issues.apache.org/jira/browse/HDFS-991
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.22.0

 Attachments: h-991.patch, h-991.patch, h-991.patch, h-991.patch, 
 h-991.patch


 Assuming the user authenticates to the NameNode in the browser, allow them to 
 browse the file system by adding a delegation token to the URL when it is 
 redirected to a datanode.




[jira] Commented: (HDFS-859) fuse-dfs utime behavior causes issues with tar

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853952#action_12853952
 ] 

Hudson commented on HDFS-859:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 fuse-dfs utime behavior causes issues with tar
 --

 Key: HDFS-859
 URL: https://issues.apache.org/jira/browse/HDFS-859
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/fuse-dfs
Reporter: Brian Bockelman
Assignee: Brian Bockelman
Priority: Minor
 Fix For: 0.22.0

 Attachments: HDFS-859-2.patch, HDFS-859.patch


 When trying to untar files onto fuse-dfs, tar will try to set the utime on 
 all the files and directories.  However, setting the utime on a directory in 
 libhdfs causes an error.
 We should silently ignore the failure of setting a utime on a directory; this 
 will allow tar to complete successfully.
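 The proposed behavior can be sketched as a thin shim around the utime call. 
 Here setTimesOrThrow is a hypothetical stand-in for libhdfs's utime entry 
 point, which rejects directories; only the wrapper's swallow-on-directory 
 logic is the point:

```java
import java.io.File;

public class FuseUtimeShim {
    // Apply a utime-style update, but swallow failures on directories so
    // that tools like tar, which touch every extracted entry, can finish.
    public static boolean setTimes(File f, long mtimeMillis) {
        try {
            setTimesOrThrow(f, mtimeMillis);
            return true;
        } catch (UnsupportedOperationException e) {
            if (f.isDirectory()) {
                return true;   // pretend success: tar only needs a 0 return
            }
            throw e;           // a real failure on a file still surfaces
        }
    }

    // Stand-in that mimics libhdfs rejecting utime on a directory.
    static void setTimesOrThrow(File f, long mtimeMillis) {
        if (f.isDirectory()) {
            throw new UnsupportedOperationException("utime on directory");
        }
        f.setLastModified(mtimeMillis);
    }
}
```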




[jira] Commented: (HDFS-520) Create new tests for block recovery

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853957#action_12853957
 ] 

Hudson commented on HDFS-520:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Create new tests for block recovery
 ---

 Key: HDFS-520
 URL: https://issues.apache.org/jira/browse/HDFS-520
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Konstantin Boudnik
Assignee: Hairong Kuang
 Fix For: 0.21.0, 0.22.0

 Attachments: blockRecoveryPositive.patch, 
 blockRecoveryPositive1.patch, blockRecoveryPositive2.patch, 
 blockRecoveryPositive2_0.21.patch, 
 TEST-org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.txt


 According to the test plan, a number of new features are going to be 
 implemented as part of this umbrella (HDFS-265) JIRA.
 These new features have to be tested properly. Block recovery is one piece of 
 new functionality that requires new tests to be developed.




[jira] Commented: (HDFS-998) The servlets should quote server generated strings sent in the response

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853961#action_12853961
 ] 

Hudson commented on HDFS-998:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 The servlets should quote server generated strings sent in the response
 ---

 Key: HDFS-998
 URL: https://issues.apache.org/jira/browse/HDFS-998
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.22.0
Reporter: Devaraj Das
Assignee: Chris Douglas
 Fix For: 0.22.0

 Attachments: H998-0y20.patch, H998-1.patch, hdfs-998-trunk-v1.patch


 This is the HDFS equivalent of MAPREDUCE-1454.




[jira] Commented: (HDFS-861) fuse-dfs does not support O_RDWR

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853956#action_12853956
 ] 

Hudson commented on HDFS-861:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 fuse-dfs does not support O_RDWR
 

 Key: HDFS-861
 URL: https://issues.apache.org/jira/browse/HDFS-861
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/fuse-dfs
Reporter: Brian Bockelman
Assignee: Brian Bockelman
Priority: Minor
 Fix For: 0.22.0

 Attachments: HDFS-861.patch


 Some applications (for us, the big one is rsync) will open a file in 
 read-write mode when it really only intends to read or write (not both). 
 fuse-dfs should not fail until the application actually tries to write to a 
 pre-existing file or read from a newly created file.




[jira] Commented: (HDFS-1067) Create block recovery tests that handle errors

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853960#action_12853960
 ] 

Hudson commented on HDFS-1067:
--

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Create block recovery tests that handle errors
 --

 Key: HDFS-1067
 URL: https://issues.apache.org/jira/browse/HDFS-1067
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.21.0, 0.22.0

 Attachments: blockRecoveryNegativeTests.patch


 This jira is for implementing the block recovery tests described in the 
 append test plan, section 7: fault injection tests for block recovery. Of the 
 11 test cases, BlockRecoveryFI_01 and 02 have already been implemented in 
 TestInterDatanodeProtocol.java, and BlockRecoveryFI_08 was implemented in 
 HDFS-520. This jira is to implement the rest of the test cases.




[jira] Commented: (HDFS-919) Create test to validate the BlocksVerified metric

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853953#action_12853953
 ] 

Hudson commented on HDFS-919:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Create test to validate the BlocksVerified metric
 -

 Key: HDFS-919
 URL: https://issues.apache.org/jira/browse/HDFS-919
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 0.20.2
Reporter: gary murry
 Fix For: 0.20.2, 0.21.0, 0.22.0

 Attachments: HDFS-919.patch, HDFS-919.patch, HDFS-919.patch, 
 HDFS-919_0.20.patch, HDFS-919_2.patch


 Just adding some tests to validate the BlocksVerified metric.




[jira] Commented: (HDFS-946) NameNode should not return full path name when listing a directory or getting the status of a file

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853958#action_12853958
 ] 

Hudson commented on HDFS-946:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 NameNode should not return full path name when listing a directory or getting 
 the status of a file
 -

 Key: HDFS-946
 URL: https://issues.apache.org/jira/browse/HDFS-946
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.22.0

 Attachments: HdfsFileStatus-yahoo20.patch, HDFSFileStatus.patch, 
 HDFSFileStatus1.patch, HdfsFileStatus3.patch, HdfsFileStatus4.patch, 
 HdfsFileStatusProxy-Yahoo20.patch


 FSDirectory#getListing(String src) has the following code:
 {code}
 int i = 0;
 for (INode cur : contents) {
   listing[i] = createFileStatus(srcs+cur.getLocalName(), cur);
   i++;
 }
 {code}
 So listing a directory will return an array of FileStatus. Each FileStatus 
 element has the full path name. This increases the return message size and 
 adds non-negligible CPU time to the operation.
 FSDirectory#getFileInfo(String) does not need to return the file name either.
 Another optimization is that in the version of FileStatus that's used in the 
 wire protocol, the path field does not need to be a Path; it could be a 
 String or, ideally, a byte array. This would avoid unnecessary creation of 
 Path objects at the NameNode, and thus help reduce the GC problem observed 
 when a large number of getFileInfo or getListing operations hit the NameNode.
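 A minimal sketch of the proposed wire form, under the assumption that only 
 the entry's local name crosses the wire as bytes and the client rebuilds the 
 full path from the directory it listed (names here are illustrative, not the 
 committed HdfsFileStatus API):

```java
import java.nio.charset.StandardCharsets;

public class SlimFileStatus {
    // Only the entry's local name is carried, as raw bytes: no Path
    // objects are created on the server side.
    private final byte[] localName;

    public SlimFileStatus(String localName) {
        this.localName = localName.getBytes(StandardCharsets.UTF_8);
    }

    // Client side: prepend the parent directory that was listed.
    public String fullPath(String parent) {
        String name = new String(localName, StandardCharsets.UTF_8);
        return parent.endsWith("/") ? parent + name : parent + "/" + name;
    }
}
```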




[jira] Commented: (HDFS-826) Allow a mechanism for an application to detect that datanode(s) have died in the write pipeline

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853959#action_12853959
 ] 

Hudson commented on HDFS-826:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Allow a mechanism for an application to detect that datanode(s)  have died in 
 the write pipeline
 

 Key: HDFS-826
 URL: https://issues.apache.org/jira/browse/HDFS-826
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Fix For: 0.22.0

 Attachments: HDFS-826-0.20-v2.patch, HDFS-826-0.20.patch, 
 Replicable4.txt, ReplicableHdfs.txt, ReplicableHdfs2.txt, ReplicableHdfs3.txt


 HDFS does not replicate the last block of the file that is being currently 
 written to by an application. Every datanode death in the write pipeline 
 decreases the reliability of the last block of the file currently being 
 written. This situation can be improved if the application can be notified 
 of a datanode death in the write pipeline; the application can then decide 
 the right course of action to take on this event.
 In our use-case, the application can close the file on the first datanode 
 death and start writing to a newly created file. This ensures that the 
 reliability guarantee of a block stays close to 3 at all times.
 One idea is to make DFSOutputStream.write() throw an exception if the number 
 of datanodes in the write pipeline falls below the minimum.replication.factor 
 that is set on the client (this is backward compatible).
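 The proposed client-side check might look like the following sketch; the 
 class and field names are invented for illustration and are not 
 DFSOutputStream's real internals:

```java
// Fail the write as soon as the live pipeline shrinks below the
// client's configured minimum replication.
public class PipelineGuard {
    private final int minReplication;
    private int liveDatanodes;

    public PipelineGuard(int minReplication, int initialPipelineSize) {
        this.minReplication = minReplication;
        this.liveDatanodes = initialPipelineSize;
    }

    public void datanodeDied() {
        liveDatanodes--;
    }

    // Called from write(): throwing here lets the application close the
    // file and start a new one, as in the use case described above.
    public void checkPipeline() {
        if (liveDatanodes < minReplication) {
            throw new IllegalStateException("pipeline has " + liveDatanodes
                + " live datanodes, need at least " + minReplication);
        }
    }
}
```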




[jira] Commented: (HDFS-999) Secondary namenode should login using kerberos if security is configured

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853963#action_12853963
 ] 

Hudson commented on HDFS-999:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Secondary namenode should login using kerberos if security is configured
 

 Key: HDFS-999
 URL: https://issues.apache.org/jira/browse/HDFS-999
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Boris Shkolnik
Assignee: Boris Shkolnik
 Attachments: HDFS-999-BP20.patch, HDFS-999.patch


 Right now, if NameNode is configured to use Kerberos, SecondaryNameNode will 
 fail to start.




[jira] Commented: (HDFS-984) Delegation Tokens should be persisted in Namenode

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853967#action_12853967
 ] 

Hudson commented on HDFS-984:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Delegation Tokens should be persisted in Namenode
 -

 Key: HDFS-984
 URL: https://issues.apache.org/jira/browse/HDFS-984
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Fix For: 0.22.0

 Attachments: HDFS-984-0_20.4.patch, HDFS-984.10.patch, 
 HDFS-984.11.patch, HDFS-984.12.patch, HDFS-984.14.patch, HDFS-984.7.patch


 Delegation tokens should be persisted in the FsImage and EditLogs so that 
 they remain valid after a namenode shutdown and restart.




[jira] Commented: (HDFS-961) dfs_readdir incorrectly parses paths

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853962#action_12853962
 ] 

Hudson commented on HDFS-961:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 dfs_readdir incorrectly parses paths
 

 Key: HDFS-961
 URL: https://issues.apache.org/jira/browse/HDFS-961
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/fuse-dfs
Affects Versions: 0.20.1, 0.20.2, 0.21.0
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 0.22.0

 Attachments: hdfs-961-1.patch, hdfs-961-2.patch


 fuse-dfs dfs_readdir assumes that DistributedFileSystem#listStatus returns 
 Paths with the same scheme/authority as the dfs.name.dir used to connect. If 
 NameNode.DEFAULT_PORT is used, listStatus returns Paths whose authorities 
 omit the port (see HDFS-960), which breaks the following code.
 {code}
 // hack city: todo fix the below to something nicer and more maintainable but
 // with good performance
 // strip off the path but be careful if the path is solely '/'
 // NOTE - this API started returning filenames as full dfs uris
 const char *const str = info[i].mName + dfs->dfs_uri_len + path_len +
     ((path_len == 1 && *path == '/') ? 0 : 1);
 {code}
 Let's make the path parsing here more robust. listStatus returns normalized 
 paths, so we can find the start of the path by searching for the third slash. 
 A more long-term solution is to have hdfsFileInfo maintain a path object, or 
 at least pointers to the relevant URI components.
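 The third-slash rule suggested above can be illustrated as follows (in Java 
 for brevity, though the fix itself lives in fuse-dfs's C code): for a 
 normalized URI of the form scheme://authority/path, the path begins at the 
 third slash whether or not the authority carries an explicit port.

```java
public class UriPathParser {
    // Return the path component of a normalized scheme://authority/path URI.
    public static String pathOf(String uri) {
        int slashes = 0;
        for (int i = 0; i < uri.length(); i++) {
            if (uri.charAt(i) == '/' && ++slashes == 3) {
                return uri.substring(i);
            }
        }
        return "/";  // no path component present
    }
}
```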




[jira] Commented: (HDFS-1046) Build fails trying to download an old version of tomcat

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853964#action_12853964
 ] 

Hudson commented on HDFS-1046:
--

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Build fails trying to download an old version of tomcat
 ---

 Key: HDFS-1046
 URL: https://issues.apache.org/jira/browse/HDFS-1046
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, contrib/hdfsproxy
Reporter: gary murry
Assignee: Srikanth Sundarrajan
Priority: Blocker
 Fix For: 0.21.0, 0.22.0

 Attachments: h1046_20100326.patch


 It looks like HDFSProxy is trying to get an old version of tomcat (6.0.18).  
 /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/contrib/hdfsproxy/build.xml:292:
  org.codehaus.cargo.container.ContainerException: Failed to download 
 [http://apache.osuosl.org/tomcat/tomcat-6/v6.0.18/bin/apache-tomcat-6.0.18.zip]
 Looking at http://apache.osuosl.org/tomcat/tomcat-6/ , it looks like the only 
 two versions available are 6.0.24 and 6.0.26.




[jira] Commented: (HDFS-927) DFSInputStream retries too many times for new block locations

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853965#action_12853965
 ] 

Hudson commented on HDFS-927:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 DFSInputStream retries too many times for new block locations
 -

 Key: HDFS-927
 URL: https://issues.apache.org/jira/browse/HDFS-927
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.21.0, 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Critical
 Fix For: 0.20.2, 0.21.0

 Attachments: hdfs-927-branch-0.21.txt, hdfs-927-branch0.20.txt, 
 hdfs-927.txt


 I think this is a regression caused by HDFS-127 -- DFSInputStream is supposed 
 to go back to the NN only max.block.acquires times, but in trunk it goes back 
 twice as many times: the default is 3, but I am counting 7 calls to 
 getBlockLocations before an exception is thrown.
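 The intended cap can be sketched as follows; the names are illustrative 
 stand-ins and the real DFSInputStream retry logic is more involved:

```java
import java.util.function.Supplier;

// Consult the namenode at most maxBlockAcquires times in total, then
// give up, so no code path can double the number of lookups.
public class BlockLocationFetcher {
    public static <T> T fetchWithCap(Supplier<T> getBlockLocations,
                                     int maxBlockAcquires) {
        for (int attempt = 0; attempt < maxBlockAcquires; attempt++) {
            T locations = getBlockLocations.get();
            if (locations != null) {
                return locations;  // success: no further namenode calls
            }
        }
        throw new RuntimeException("no block locations after "
            + maxBlockAcquires + " attempts");
    }
}
```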




[jira] Commented: (HDFS-994) Provide methods for obtaining delegation token from Namenode for hftp and other uses

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853968#action_12853968
 ] 

Hudson commented on HDFS-994:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Provide methods for obtaining delegation token from Namenode for hftp and 
 other uses
 

 Key: HDFS-994
 URL: https://issues.apache.org/jira/browse/HDFS-994
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Fix For: 0.22.0

 Attachments: HDFS-994-0_20.1.patch, HDFS-994-2.patch, 
 HDFS-994-3.patch, HDFS-994-4.patch, HDFS-994-5.patch, HDFS-994.patch


 In hftp, destination clusters will require an RPC-version-agnostic means of 
 obtaining delegation tokens from the source cluster. The easiest method is to 
 provide a web service that retrieves a token over http. This can be encrypted 
 via SSL (backed by Kerberos, done in another JIRA), providing security for 
 cross-cluster hftp operations.




[jira] Commented: (HDFS-965) TestDelegationToken fails in trunk

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853969#action_12853969
 ] 

Hudson commented on HDFS-965:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 TestDelegationToken fails in trunk
 --

 Key: HDFS-965
 URL: https://issues.apache.org/jira/browse/HDFS-965
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Fix For: 0.22.0

 Attachments: HDFS-965.1.patch, HDFS-965.2.patch


 TestDelegationToken is failing in trunk because of superuser authorization 
 check. The superuser group and ip are required to be configured in the test.




[jira] Commented: (HDFS-1024) SecondaryNamenode fails to checkpoint because namenode fails with CancelledKeyException

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853970#action_12853970
 ] 

Hudson commented on HDFS-1024:
--

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 SecondaryNamenode fails to checkpoint because namenode fails with 
 CancelledKeyException
 ---

 Key: HDFS-1024
 URL: https://issues.apache.org/jira/browse/HDFS-1024
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.1, 0.20.2, 0.20.3, 0.21.0, 0.22.0
Reporter: dhruba borthakur
Assignee: Dmytro Molkov
Priority: Blocker
 Fix For: 0.22.0

 Attachments: HDFS-1024.patch, HDFS-1024.patch.1, 
 HDFS-1024.patch.1-0.20.txt


 The secondary namenode fails to retrieve the entire fsimage from the 
 Namenode. It fetches a part of the fsimage but believes that it has fetched 
 the entire fsimage file and proceeds ahead with the checkpointing. Stack 
 traces will be attached below.




[jira] Commented: (HDFS-1043) Benchmark overhead of server-side group resolution of users

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853971#action_12853971
 ] 

Hudson commented on HDFS-1043:
--

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Benchmark overhead of server-side group resolution of users
 ---

 Key: HDFS-1043
 URL: https://issues.apache.org/jira/browse/HDFS-1043
 Project: Hadoop HDFS
  Issue Type: Test
  Components: benchmarks
Affects Versions: 0.22.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 0.22.0

 Attachments: UGCRefresh.patch, UGCRefresh.patch


 Server-side user group resolution was introduced in HADOOP-4656. 
 The benchmark should repeatedly request the name-node for user group 
 resolution, and reset NN's user group cache periodically.
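 A hedged sketch of such a benchmark loop, with a plain map standing in for 
 the NN's user-group cache and a callback standing in for the group-mapping 
 service (none of these names are Hadoop's actual API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Resolve groups for one user over and over, flushing the cache every
// refreshInterval requests so each flush forces a real lookup. Returns
// the number of uncached (backend) resolutions.
public class GroupResolutionBench {
    public static long run(Function<String, String> rawLookup,
                           int requests, int refreshInterval) {
        Map<String, String> cache = new HashMap<>();
        long misses = 0;
        for (int i = 0; i < requests; i++) {
            if (i % refreshInterval == 0) {
                cache.clear();  // the periodic cache reset described above
            }
            if (!cache.containsKey("user")) {
                cache.put("user", rawLookup.apply("user"));
                misses++;       // this request hit the (simulated) backend
            }
        }
        return misses;
    }
}
```

 Timing the loop with and without the periodic reset isolates the server-side 
 resolution overhead from the cost of a cache hit.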




[jira] Commented: (HDFS-968) s/StringBuffer/StringBuilder - as necessary

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853972#action_12853972
 ] 

Hudson commented on HDFS-968:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 s/StringBuffer/StringBuilder - as necessary
 ---

 Key: HDFS-968
 URL: https://issues.apache.org/jira/browse/HDFS-968
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Kay Kay
Assignee: Kay Kay
 Fix For: 0.22.0

 Attachments: HDFS-968.patch







[jira] Commented: (HDFS-986) Push HADOOP-6551 into HDFS

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853973#action_12853973
 ] 

Hudson commented on HDFS-986:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Push HADOOP-6551 into HDFS
 --

 Key: HDFS-986
 URL: https://issues.apache.org/jira/browse/HDFS-986
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.22.0

 Attachments: h-986-1.patch, h-986.patch


 We need to throw readable error messages instead of returning false on errors.




[jira] Commented: (HDFS-894) DatanodeID.ipcPort is not updated when existing node re-registers

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12853974#action_12853974
 ] 

Hudson commented on HDFS-894:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 DatanodeID.ipcPort is not updated when existing node re-registers
 -

 Key: HDFS-894
 URL: https://issues.apache.org/jira/browse/HDFS-894
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.20.1, 0.21.0, 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 0.22.0

 Attachments: hdfs-894.txt


 In FSNamesystem.registerDatanode, it checks if a registering node is a 
 reregistration of an old one based on storage ID. If so, it simply updates 
 the old one with the new registration info. However, the new ipcPort is lost 
 when this happens.
 I reproduced this manually by setting up a DN with its IPC port set to 0 (so 
 it picks an ephemeral port) and then restarting the DN. At this point, the 
 NN's view of the ipcPort is stale, and clients will not be able to achieve 
 pipeline recovery.
 This should be easy to fix and unit test, but not sure when I'll get to it, 
 so anyone else should feel free to grab it if they get to it first.
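The shape of the fix the report implies can be sketched with a simplified model (class and field names here are illustrative, not the actual Hadoop sources): when a node with a known storage ID re-registers, the stored record must also take the new ipcPort.

```java
// Simplified model of the HDFS-894 bug: the update path copied host and
// infoPort from the new registration but forgot ipcPort, so the NN kept
// pointing clients at a stale ephemeral port.
public class ReRegistration {
    static class DatanodeID {
        final String storageID;
        String host;
        int infoPort;
        int ipcPort;

        DatanodeID(String storageID, String host, int infoPort, int ipcPort) {
            this.storageID = storageID;
            this.host = host;
            this.infoPort = infoPort;
            this.ipcPort = ipcPort;
        }

        // Update the stored record from a fresh registration.
        void updateRegInfo(DatanodeID reg) {
            this.host = reg.host;
            this.infoPort = reg.infoPort;
            this.ipcPort = reg.ipcPort; // the copy the bug report implies was missing
        }
    }

    public static void main(String[] args) {
        DatanodeID stored = new DatanodeID("DS-1", "node1", 50075, 50020);
        // DN restarts with ipcPort 0 and picks a fresh ephemeral port, say 49213.
        DatanodeID reReg = new DatanodeID("DS-1", "node1", 50075, 49213);
        stored.updateRegInfo(reReg);
        System.out.println(stored.ipcPort); // NN's view is now fresh
    }
}
```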

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-949) Move Delegation token into Common so that we can use it for MapReduce also

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853975#action_12853975
 ] 

Hudson commented on HDFS-949:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 Move Delegation token into Common so that we can use it for MapReduce also
 --

 Key: HDFS-949
 URL: https://issues.apache.org/jira/browse/HDFS-949
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.22.0

 Attachments: 6547-949-1470-0_20.1.patch, h-949-common.patch, 
 h-949-common.patch, h-949.patch, h-949.patch, h-949.patch


 We need to support a MapReduce job that launches another MapReduce job inside 
 its Mapper. Since the task doesn't have any Kerberos tickets, we need a 
 delegation token. Moving the HDFS Delegation token to Common will allow both 
 projects to use it.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-985) HDFS should issue multiple RPCs for listing a large directory

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853976#action_12853976
 ] 

Hudson commented on HDFS-985:
-

Integrated in Hadoop-Hdfs-trunk #275 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/275/])


 HDFS should issue multiple RPCs for listing a large directory
 -

 Key: HDFS-985
 URL: https://issues.apache.org/jira/browse/HDFS-985
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.22.0

 Attachments: directoryBrowse_0.20yahoo.patch, 
 directoryBrowse_0.20yahoo_1.patch, directoryBrowse_0.20yahoo_2.patch, 
 iterativeLS_trunk.patch, iterativeLS_trunk1.patch, iterativeLS_trunk2.patch, 
 iterativeLS_trunk3.patch, iterativeLS_trunk3.patch, iterativeLS_trunk4.patch, 
 iterativeLS_yahoo.patch, iterativeLS_yahoo1.patch, testFileStatus.patch


 Currently HDFS issues one RPC from the client to the NameNode for listing a 
 directory. However, some directories are so large that they contain thousands 
 or even millions of items. Listing such large directories in one RPC has a 
 few shortcomings:
 1. The list operation holds the global fsnamesystem lock for a long time, 
 thus blocking other requests. If a large number (like thousands) of such list 
 requests hit the NameNode in a short period of time, the NameNode will be 
 significantly slowed down. Users end up noticing longer response times or 
 lost connections to the NameNode.
 2. The response message is uncontrollably big. We observed a response as big 
 as 50M bytes when listing a directory of 300 thousand items. Even with the 
 optimization introduced in HDFS-946, which may be able to cut the response by 
 20-50%, the response size will still be in the tens of megabytes.
 I propose to implement directory listing using multiple RPCs. Here is the 
 plan:
 1. Each getListing RPC has an upper limit on the number of items returned. 
 This limit could be configurable, but I am thinking of setting it to a fixed 
 number like 500.
 2. Each RPC additionally specifies a start position for this listing request. 
 I am thinking of using the last item of the previous listing RPC as the 
 indicator. Since the NameNode stores all items in a directory as a sorted 
 array, the NameNode can use the last item to locate the start item of this 
 listing even if that item is deleted between the two consecutive calls. This 
 has the advantage of avoiding duplicate entries at the client side.
 3. The return value additionally indicates whether the whole directory has 
 been listed. If the client sees a false flag, it will continue to issue 
 another RPC.
 This proposal changes the semantics of large directory listing in the sense 
 that listing is no longer an atomic operation if the directory's content is 
 changing while the listing operation is in progress.
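The three-step plan above can be sketched as a paging loop. This is a hedged model of the proposed protocol, not the eventual HDFS API; all names (getListing, Partial, LIMIT) are illustrative, and the "NameNode" is simulated by a sorted in-memory set:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

// Sketch of the proposed multi-RPC directory listing. Each "RPC" returns at
// most LIMIT items starting strictly after a given key, plus a flag telling
// the client whether more items remain.
public class PagedListing {
    static final int LIMIT = 3; // the proposal suggests a fixed limit like 500

    static class Partial {
        final List<String> items;
        final boolean hasMore;
        Partial(List<String> items, boolean hasMore) {
            this.items = items;
            this.hasMore = hasMore;
        }
    }

    // Simulated directory: the NameNode keeps entries sorted.
    static final TreeSet<String> dir = new TreeSet<>();

    // One "RPC": start == null lists from the beginning; otherwise resume
    // strictly after the last item the client saw. Because resumption is a
    // key lookup, it still works if that item was deleted in between.
    static Partial getListing(String start) {
        List<String> page = new ArrayList<>();
        Iterable<String> tail = (start == null) ? dir : dir.tailSet(start, false);
        for (String name : tail) {
            if (page.size() == LIMIT) return new Partial(page, true);
            page.add(name);
        }
        return new Partial(page, false);
    }

    // Client side: keep issuing RPCs until the server reports completion.
    static List<String> listAll() {
        List<String> all = new ArrayList<>();
        String start = null;
        Partial p;
        do {
            p = getListing(start);
            all.addAll(p.items);
            if (!p.items.isEmpty()) start = p.items.get(p.items.size() - 1);
        } while (p.hasMore);
        return all;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 7; i++) dir.add("file-" + i);
        System.out.println(listAll()); // all 7 entries, fetched in pages of 3
    }
}
```

Note the non-atomicity the proposal mentions: entries added or removed between pages may or may not appear in the combined result.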

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-481) Bug Fixes + HdfsProxy to use proxy user to impersonate the real user

2010-04-06 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-481:


Hadoop Flags: [Reviewed]

+1 patch looks good.

 Bug Fixes + HdfsProxy to use proxy user to impersonate the real user
 

 Key: HDFS-481
 URL: https://issues.apache.org/jira/browse/HDFS-481
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/hdfsproxy
Affects Versions: 0.21.0
Reporter: zhiyong zhang
Assignee: Srikanth Sundarrajan
 Attachments: HDFS-481-bp-y20.patch, HDFS-481-bp-y20s.patch, 
 HDFS-481.out, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, 
 HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch


 Bugs:
 1. hadoop-version is not recognized when running the ant command from 
 src/contrib/ or from src/contrib/hdfsproxy.
 If running ant from $HADOOP_HDFS_HOME, hadoop-version will be passed 
 to contrib's build through subant. But if running from src/contrib or 
 src/contrib/hdfsproxy, hadoop-version will not be recognized.
 2. LdapIpDirFilter.java is not thread safe. userName, Group & Paths are 
 per-request and can't be class members.
 3. Addressed the following StackOverflowError:
 ERROR [org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/].[proxyForward]] 
 Servlet.service() for servlet proxyForward threw exception
 java.lang.StackOverflowError
 at org.apache.catalina.core.ApplicationHttpRequest.getAttribute(ApplicationHttpRequest.java:229)
 This occurs when the target war (/target.war) does not exist: the 
 forwarding war forwards to its parent context path /, which defines the 
 forwarding war itself, causing an infinite loop. Added "HDFS Proxy 
 Forward".equals(dstContext.getServletContextName()) to the if logic to break 
 the loop.
 4. Kerberos credentials of the remote user aren't available. HdfsProxy needs 
 to act on behalf of the real user to service the requests.
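The loop-break check in item 3 can be sketched in isolation. This is a hedged model, not the actual hdfsproxy servlet code; the constant and method name are illustrative:

```java
// Minimal sketch of the HDFS-481 forward-loop guard: forward only when the
// destination context is not the forwarding servlet's own context.
// Otherwise, a missing target war resolves back to "/", which is the
// forwarder itself, and the / -> / recursion overflows the stack.
public class ForwardGuard {
    static final String SELF = "HDFS Proxy Forward";

    static boolean shouldForward(String dstContextName) {
        // Literal-first equals: dstContextName may be null when the target
        // context cannot be resolved at all.
        return !SELF.equals(dstContextName);
    }

    public static void main(String[] args) {
        System.out.println(shouldForward("target"));             // true: forward
        System.out.println(shouldForward("HDFS Proxy Forward")); // false: break the loop
    }
}
```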

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-955) FSImage.saveFSImage can lose edits

2010-04-06 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-955:
-

Attachment: saveNamespace-0.20.patch
saveNamespace-0.21.patch

Here are the new patches for 0.21 and 0.20 branches.

 FSImage.saveFSImage can lose edits
 --

 Key: HDFS-955
 URL: https://issues.apache.org/jira/browse/HDFS-955
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.1, 0.21.0, 0.22.0
Reporter: Todd Lipcon
Assignee: Konstantin Shvachko
Priority: Blocker
 Attachments: FSStateTransition7.htm, hdfs-955-moretests.txt, 
 hdfs-955-unittest.txt, PurgeEditsBeforeImageSave.patch, 
 saveNamespace-0.20.patch, saveNamespace-0.20.patch, saveNamespace-0.21.patch, 
 saveNamespace-0.21.patch, saveNamespace.patch, saveNamespace.patch, 
 saveNamespace.patch, saveNamespace.patch, saveNamespace.txt


 This is a continuation of a discussion from HDFS-909. The FSImage.saveFSImage 
 function (implementing dfsadmin -saveNamespace) can corrupt the NN storage 
 such that all current edits are lost.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-481) Bug Fixes + HdfsProxy to use proxy user to impersonate the real user

2010-04-06 Thread Srikanth Sundarrajan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854080#action_12854080
 ] 

Srikanth Sundarrajan commented on HDFS-481:
---

Output from test-patch

 [exec] +1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 15 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.

test-contrib:


test:
BUILD SUCCESSFUL



 Bug Fixes + HdfsProxy to use proxy user to impersonate the real user
 

 Key: HDFS-481
 URL: https://issues.apache.org/jira/browse/HDFS-481
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/hdfsproxy
Affects Versions: 0.21.0
Reporter: zhiyong zhang
Assignee: Srikanth Sundarrajan
 Attachments: HDFS-481-bp-y20.patch, HDFS-481-bp-y20s.patch, 
 HDFS-481.out, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, 
 HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch


 Bugs:
 1. hadoop-version is not recognized when running the ant command from 
 src/contrib/ or from src/contrib/hdfsproxy.
 If running ant from $HADOOP_HDFS_HOME, hadoop-version will be passed 
 to contrib's build through subant. But if running from src/contrib or 
 src/contrib/hdfsproxy, hadoop-version will not be recognized.
 2. LdapIpDirFilter.java is not thread safe. userName, Group & Paths are 
 per-request and can't be class members.
 3. Addressed the following StackOverflowError:
 ERROR [org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/].[proxyForward]] 
 Servlet.service() for servlet proxyForward threw exception
 java.lang.StackOverflowError
 at org.apache.catalina.core.ApplicationHttpRequest.getAttribute(ApplicationHttpRequest.java:229)
 This occurs when the target war (/target.war) does not exist: the 
 forwarding war forwards to its parent context path /, which defines the 
 forwarding war itself, causing an infinite loop. Added "HDFS Proxy 
 Forward".equals(dstContext.getServletContextName()) to the if logic to break 
 the loop.
 4. Kerberos credentials of the remote user aren't available. HdfsProxy needs 
 to act on behalf of the real user to service the requests.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-1082) CHANGES.txt in the last three branches diverged

2010-04-06 Thread Konstantin Shvachko (JIRA)
CHANGES.txt in the last three branches diverged
---

 Key: HDFS-1082
 URL: https://issues.apache.org/jira/browse/HDFS-1082
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.2
Reporter: Konstantin Shvachko
 Fix For: 0.20.3


In particular, CHANGES.txt in hdfs trunk and 0.21 don't reflect that 0.20.2 has 
been released, there is no section for 0.20.3, and the diff of the fixed issues 
is not uniform.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HDFS-481) Bug Fixes + HdfsProxy to use proxy user to impersonate the real user

2010-04-06 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-481.
-

Resolution: Fixed

I also have tested it locally.  It worked fine.

I have committed this.  Thanks, Srikanth!

 Bug Fixes + HdfsProxy to use proxy user to impersonate the real user
 

 Key: HDFS-481
 URL: https://issues.apache.org/jira/browse/HDFS-481
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/hdfsproxy
Affects Versions: 0.21.0
Reporter: zhiyong zhang
Assignee: Srikanth Sundarrajan
 Attachments: HDFS-481-bp-y20.patch, HDFS-481-bp-y20s.patch, 
 HDFS-481.out, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, 
 HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch


 Bugs:
 1. hadoop-version is not recognized when running the ant command from 
 src/contrib/ or from src/contrib/hdfsproxy.
 If running ant from $HADOOP_HDFS_HOME, hadoop-version will be passed 
 to contrib's build through subant. But if running from src/contrib or 
 src/contrib/hdfsproxy, hadoop-version will not be recognized.
 2. LdapIpDirFilter.java is not thread safe. userName, Group & Paths are 
 per-request and can't be class members.
 3. Addressed the following StackOverflowError:
 ERROR [org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/].[proxyForward]] 
 Servlet.service() for servlet proxyForward threw exception
 java.lang.StackOverflowError
 at org.apache.catalina.core.ApplicationHttpRequest.getAttribute(ApplicationHttpRequest.java:229)
 This occurs when the target war (/target.war) does not exist: the 
 forwarding war forwards to its parent context path /, which defines the 
 forwarding war itself, causing an infinite loop. Added "HDFS Proxy 
 Forward".equals(dstContext.getServletContextName()) to the if logic to break 
 the loop.
 4. Kerberos credentials of the remote user aren't available. HdfsProxy needs 
 to act on behalf of the real user to service the requests.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1082) CHANGES.txt in the last three branches diverged

2010-04-06 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1082:
--

Component/s: documentation
 Issue Type: Improvement  (was: Bug)

 CHANGES.txt in the last three branches diverged
 ---

 Key: HDFS-1082
 URL: https://issues.apache.org/jira/browse/HDFS-1082
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.20.2
Reporter: Konstantin Shvachko
 Fix For: 0.20.3


 In particular, CHANGES.txt in hdfs trunk and 0.21 don't reflect that 0.20.2 
 has been released, there is no section for 0.20.3, and the diff of the fixed 
 issues is not uniform.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1082) CHANGES.txt in the last three branches diverged

2010-04-06 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854092#action_12854092
 ] 

Chris Douglas commented on HDFS-1082:
-

There is no section tracking pre-0.21 changes in HDFS and MAPREDUCE, since 
that's when the project split occurred. The pre-0.21 changes are tracked with 
the 0.20 branch, which remains in COMMON.

It's awkward, but consistent.

 CHANGES.txt in the last three branches diverged
 ---

 Key: HDFS-1082
 URL: https://issues.apache.org/jira/browse/HDFS-1082
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.20.2
Reporter: Konstantin Shvachko
 Fix For: 0.20.3


 In particular, CHANGES.txt in hdfs trunk and 0.21 don't reflect that 0.20.2 
 has been released, there is no section for 0.20.3, and the diff of the fixed 
 issues is not uniform.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1007) HFTP needs to be updated to use delegation tokens

2010-04-06 Thread Boris Shkolnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boris Shkolnik updated HDFS-1007:
-

Attachment: HDFS-1007-BP20-fix-3.patch

 HFTP needs to be updated to use delegation tokens
 -

 Key: HDFS-1007
 URL: https://issues.apache.org/jira/browse/HDFS-1007
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 0.22.0
Reporter: Devaraj Das
Assignee: Devaraj Das
 Fix For: 0.22.0

 Attachments: distcp-hftp-2.1.1.patch, distcp-hftp.1.patch, 
 distcp-hftp.2.1.patch, distcp-hftp.2.patch, distcp-hftp.patch, 
 HDFS-1007-BP20-fix-1.patch, HDFS-1007-BP20-fix-2.patch, 
 HDFS-1007-BP20-fix-3.patch, HDFS-1007-BP20.patch


 HFTPFileSystem should be updated to use the delegation tokens so that it can 
 talk to the secure namenodes.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-482) change HsftpFileSystem's ssl.client.do.not.authenticate.server configuration setting to ssl-client.xml

2010-04-06 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854103#action_12854103
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-482:
-

+1 patch looks good.

 change HsftpFileSystem's ssl.client.do.not.authenticate.server configuration 
 setting to ssl-client.xml  
 

 Key: HDFS-482
 URL: https://issues.apache.org/jira/browse/HDFS-482
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/hdfsproxy
Affects Versions: 0.22.0
 Environment: currently this config setting can only be set via HDFS's 
 configuration files; it needs to move to ssl-client.xml.
Reporter: zhiyong zhang
Assignee: Srikanth Sundarrajan
 Fix For: 0.22.0

 Attachments: HDFS-482.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-955) FSImage.saveFSImage can lose edits

2010-04-06 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-955:
-

   Resolution: Fixed
Fix Version/s: 0.20.3
 Hadoop Flags: [Reviewed]
   Status: Resolved  (was: Patch Available)

I just committed this.

 FSImage.saveFSImage can lose edits
 --

 Key: HDFS-955
 URL: https://issues.apache.org/jira/browse/HDFS-955
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.1, 0.21.0, 0.22.0
Reporter: Todd Lipcon
Assignee: Konstantin Shvachko
Priority: Blocker
 Fix For: 0.20.3

 Attachments: FSStateTransition7.htm, hdfs-955-moretests.txt, 
 hdfs-955-unittest.txt, PurgeEditsBeforeImageSave.patch, 
 saveNamespace-0.20.patch, saveNamespace-0.20.patch, saveNamespace-0.21.patch, 
 saveNamespace-0.21.patch, saveNamespace.patch, saveNamespace.patch, 
 saveNamespace.patch, saveNamespace.patch, saveNamespace.txt


 This is a continuation of a discussion from HDFS-909. The FSImage.saveFSImage 
 function (implementing dfsadmin -saveNamespace) can corrupt the NN storage 
 such that all current edits are lost.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-957) FSImage layout version should be written only once file is complete

2010-04-06 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854114#action_12854114
 ] 

Konstantin Shvachko commented on HDFS-957:
--

Should we close this with HDFS-955 in?

 FSImage layout version should be written only once file is complete
 ---

 Key: HDFS-957
 URL: https://issues.apache.org/jira/browse/HDFS-957
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hdfs-957.txt


 Right now, the FSImage save code writes the LAYOUT_VERSION at the head of the 
 file, along with some other headers, and then dumps the directory into the 
 file. Instead, it should write a special IMAGE_IN_PROGRESS entry for the 
 layout version, dump all of the data, then seek back to the head of the file 
 to write the proper LAYOUT_VERSION. This would make it very easy to detect 
 the case where the FSImage save got interrupted.
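The placeholder-then-seek-back scheme described above can be sketched with the JDK's RandomAccessFile. The constants and method names here are illustrative, not the real FSImage values or code:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Hedged sketch of the HDFS-957 idea: write a sentinel where the layout
// version belongs, dump the data, sync, then seek back and replace the
// sentinel with the real version. A crash mid-save leaves the sentinel,
// which is trivially detectable on load.
public class AtomicVersionWrite {
    static final int IMAGE_IN_PROGRESS = -1; // sentinel (illustrative value)
    static final int LAYOUT_VERSION = -24;   // hypothetical version number

    static void save(File f, byte[] namespaceData) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.writeInt(IMAGE_IN_PROGRESS); // header slot, not yet valid
            raf.write(namespaceData);        // dump the directory tree
            raf.getFD().sync();              // data durable before the flip
            raf.seek(0);
            raf.writeInt(LAYOUT_VERSION);    // image is now marked complete
            raf.getFD().sync();
        }
    }

    static boolean isComplete(File f) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
            return raf.readInt() != IMAGE_IN_PROGRESS;
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("fsimage", ".tmp");
        f.deleteOnExit();
        save(f, new byte[]{1, 2, 3});
        System.out.println(isComplete(f)); // true after a full save
    }
}
```

The fsync before the seek-back matters: without it, a filesystem with delayed allocation could persist the final version marker before the data it vouches for.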

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-481) Bug Fixes + HdfsProxy to use proxy user to impersonate the real user

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854116#action_12854116
 ] 

Hudson commented on HDFS-481:
-

Integrated in Hadoop-Hdfs-trunk-Commit #230 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/230/])
. hdfsproxy: Bug Fixes + HdfsProxy to use proxy user to impersonate the 
real user.  Contributed by Srikanth


 Bug Fixes + HdfsProxy to use proxy user to impersonate the real user
 

 Key: HDFS-481
 URL: https://issues.apache.org/jira/browse/HDFS-481
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/hdfsproxy
Affects Versions: 0.21.0
Reporter: zhiyong zhang
Assignee: Srikanth Sundarrajan
 Attachments: HDFS-481-bp-y20.patch, HDFS-481-bp-y20s.patch, 
 HDFS-481.out, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, 
 HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch


 Bugs:
 1. hadoop-version is not recognized when running the ant command from 
 src/contrib/ or from src/contrib/hdfsproxy.
 If running ant from $HADOOP_HDFS_HOME, hadoop-version will be passed 
 to contrib's build through subant. But if running from src/contrib or 
 src/contrib/hdfsproxy, hadoop-version will not be recognized.
 2. LdapIpDirFilter.java is not thread safe. userName, Group & Paths are 
 per-request and can't be class members.
 3. Addressed the following StackOverflowError:
 ERROR [org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/].[proxyForward]] 
 Servlet.service() for servlet proxyForward threw exception
 java.lang.StackOverflowError
 at org.apache.catalina.core.ApplicationHttpRequest.getAttribute(ApplicationHttpRequest.java:229)
 This occurs when the target war (/target.war) does not exist: the 
 forwarding war forwards to its parent context path /, which defines the 
 forwarding war itself, causing an infinite loop. Added "HDFS Proxy 
 Forward".equals(dstContext.getServletContextName()) to the if logic to break 
 the loop.
 4. Kerberos credentials of the remote user aren't available. HdfsProxy needs 
 to act on behalf of the real user to service the requests.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1009) Allow HDFSProxy to impersonate the real user while processing user request

2010-04-06 Thread Srikanth Sundarrajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Sundarrajan updated HDFS-1009:
---

Fix Version/s: 0.22.0
 Release Note:   (was: HDFS-481)
   Status: Patch Available  (was: Reopened)

 Allow HDFSProxy to impersonate the real user while processing user request
 --

 Key: HDFS-1009
 URL: https://issues.apache.org/jira/browse/HDFS-1009
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: contrib/hdfsproxy
Affects Versions: 0.22.0
Reporter: Srikanth Sundarrajan
Assignee: Srikanth Sundarrajan
 Fix For: 0.22.0

 Attachments: HDFS-1009.patch


 HDFSProxy, when processing a user request, should perform the operations as 
 the real user.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1009) Allow HDFSProxy to impersonate the real user while processing user request

2010-04-06 Thread Srikanth Sundarrajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Sundarrajan updated HDFS-1009:
---

Attachment: HDFS-1009.patch

Output from test-patch

 [exec] -1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] -1 tests included.  The patch doesn't appear to include any new 
or modified tests.
 [exec] Please justify why no new tests are needed 
for this patch.
 [exec] Also please list what manual steps were 
performed to verify this patch.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.


test-contrib:

test:
   [cactus] Tomcat 5.x is stopped

BUILD SUCCESSFUL
Total time: 4 minutes 39 seconds



No new tests are added with this patch, as the patch is specific to Kerberos 
and the current unit test framework doesn't extend to testing this. However, 
the patch has been tested manually.

A keytab file for the proxy user was created, and the principal in the keytab 
file is configured as the proxy user in the Namenode configuration 
[core-site.xml] (hadoop.proxyuser.proxy.users, 
hadoop.proxyuser.proxy.ip-addresses). The IP address configured in the 
Namenode's core-site.xml is that of the server where hdfsproxy is set up to 
run, and the proxy user is the same as the user in the keytab file. With this, 
doAs requests succeed and are able to retrieve files readable only by the 
requesting user or the user's group.
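The server-side check exercised by the manual test above can be modeled in isolation. This is a simplification for illustration only, not the actual Namenode code; the class, method, and principal names are hypothetical, while the configuration-key style follows the keys mentioned in the comment:

```java
import java.util.Map;
import java.util.Set;

// Illustrative model of proxy-user authorization: a doAs request is honored
// only when the authenticated proxy principal is configured as a proxy user
// AND the request originates from one of its configured IP addresses
// (the hadoop.proxyuser.<name>.users / .ip-addresses idea).
public class ProxyUserCheck {
    // config: proxy principal -> addresses it may call from
    static final Map<String, Set<String>> ALLOWED_IPS =
            Map.of("proxy@EXAMPLE.COM", Set.of("10.0.0.5"));

    static boolean allowDoAs(String proxyPrincipal, String remoteAddr) {
        Set<String> ips = ALLOWED_IPS.get(proxyPrincipal);
        return ips != null && ips.contains(remoteAddr);
    }

    public static void main(String[] args) {
        System.out.println(allowDoAs("proxy@EXAMPLE.COM", "10.0.0.5")); // allowed
        System.out.println(allowDoAs("proxy@EXAMPLE.COM", "10.0.0.9")); // wrong host
        System.out.println(allowDoAs("evil@EXAMPLE.COM", "10.0.0.5"));  // not a proxy user
    }
}
```

Once the check passes, the proxy performs the filesystem operation in a doAs context for the real user, which is why only that user's (or their group's) files are retrievable.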

 Allow HDFSProxy to impersonate the real user while processing user request
 --

 Key: HDFS-1009
 URL: https://issues.apache.org/jira/browse/HDFS-1009
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: contrib/hdfsproxy
Affects Versions: 0.22.0
Reporter: Srikanth Sundarrajan
Assignee: Srikanth Sundarrajan
 Attachments: HDFS-1009.patch


 HDFSProxy, when processing a user request, should perform the operations as 
 the real user.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-957) FSImage layout version should be written only once file is complete

2010-04-06 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854131#action_12854131
 ] 

Todd Lipcon commented on HDFS-957:
--

I think this is still a good idea, even with that bug fixed. The extra 
safeguard doesn't really cost us anything, and the fsync() is important when 
using a FS with delayed allocation.

 FSImage layout version should be written only once file is complete
 ---

 Key: HDFS-957
 URL: https://issues.apache.org/jira/browse/HDFS-957
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hdfs-957.txt


 Right now, the FSImage save code writes the LAYOUT_VERSION at the head of the 
 file, along with some other headers, and then dumps the directory into the 
 file. Instead, it should write a special IMAGE_IN_PROGRESS entry for the 
 layout version, dump all of the data, then seek back to the head of the file 
 to write the proper LAYOUT_VERSION. This would make it very easy to detect 
 the case where the FSImage save got interrupted.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-955) FSImage.saveFSImage can lose edits

2010-04-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854135#action_12854135
 ] 

Hudson commented on HDFS-955:
-

Integrated in Hadoop-Hdfs-trunk-Commit #231 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/231/])
. New implementation of saveNamespace() to avoid loss of edits when 
name-node fails during saving. Contributed by Konstantin Shvachko.


 FSImage.saveFSImage can lose edits
 --

 Key: HDFS-955
 URL: https://issues.apache.org/jira/browse/HDFS-955
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.1, 0.21.0, 0.22.0
Reporter: Todd Lipcon
Assignee: Konstantin Shvachko
Priority: Blocker
 Fix For: 0.20.3

 Attachments: FSStateTransition7.htm, hdfs-955-moretests.txt, 
 hdfs-955-unittest.txt, PurgeEditsBeforeImageSave.patch, 
 saveNamespace-0.20.patch, saveNamespace-0.20.patch, saveNamespace-0.21.patch, 
 saveNamespace-0.21.patch, saveNamespace.patch, saveNamespace.patch, 
 saveNamespace.patch, saveNamespace.patch, saveNamespace.txt


 This is a continuation of a discussion from HDFS-909. The FSImage.saveFSImage 
 function (implementing dfsadmin -saveNamespace) can corrupt the NN storage 
 such that all current edits are lost.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1012) documentLocation attribute in LdapEntry for HDFSProxy isn't specific to a cluster

2010-04-06 Thread Srikanth Sundarrajan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854144#action_12854144
 ] 

Srikanth Sundarrajan commented on HDFS-1012:


Note: HDFS-481, HDFS-1009 and HDFS-1010 will need to be committed before this 
patch can be applied

Output from test-patch & test-contrib

 [exec] +1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 4 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.


test:

BUILD SUCCESSFUL
Total time: 4 minutes 32 seconds


 documentLocation attribute in LdapEntry for HDFSProxy isn't specific to a 
 cluster
 -

 Key: HDFS-1012
 URL: https://issues.apache.org/jira/browse/HDFS-1012
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: contrib/hdfsproxy
Affects Versions: 0.20.1, 0.20.2, 0.21.0, 0.22.0
Reporter: Srikanth Sundarrajan
Assignee: Ramesh Sekaran
 Fix For: 0.20.1, 0.22.0

 Attachments: HDFS-1012-bp-y20.patch, HDFS-1012-bp-y20s.patch, 
 HDFS-1012.patch


 The list of allowed document locations accessible through HDFSProxy isn't 
 specific to a cluster. LDAP entries can include the name of the cluster to 
 which a path belongs, giving better control over which clusters/paths a 
 user can access through HDFSProxy.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1009) Allow HDFSProxy to impersonate the real user while processing user request

2010-04-06 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854154#action_12854154
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-1009:
--

Patch looks good.  Could you add some javadoc in the header of the new class 
KerberosAuthorizationFilter?

 Allow HDFSProxy to impersonate the real user while processing user request
 --

 Key: HDFS-1009
 URL: https://issues.apache.org/jira/browse/HDFS-1009
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: contrib/hdfsproxy
Affects Versions: 0.22.0
Reporter: Srikanth Sundarrajan
Assignee: Srikanth Sundarrajan
 Fix For: 0.22.0

 Attachments: HDFS-1009.patch


 When processing a user request, HDFSProxy should perform the operations as 
 the real user.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1009) Support Kerberos authorization in HDFSProxy

2010-04-06 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-1009:
-

Description: We should add a filter to support Kerberos authorization in 
HDFSProxy.  (was: HDFSProxy when processing an user request, should perform the 
operations as the real user.)
Summary: Support Kerberos authorization in HDFSProxy  (was: Allow 
HDFSProxy to impersonate the real user while processing user request)

 Support Kerberos authorization in HDFSProxy
 ---

 Key: HDFS-1009
 URL: https://issues.apache.org/jira/browse/HDFS-1009
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: contrib/hdfsproxy
Affects Versions: 0.22.0
Reporter: Srikanth Sundarrajan
Assignee: Srikanth Sundarrajan
 Fix For: 0.22.0

 Attachments: HDFS-1009.patch


 We should add a filter to support Kerberos authorization in HDFSProxy.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1009) Support Kerberos authorization in HDFSProxy

2010-04-06 Thread Srikanth Sundarrajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Sundarrajan updated HDFS-1009:
---

Attachment: HDFS-1009.patch

{quote}
Patch looks good. Could you add some javadoc in the header of the new class 
KerberosAuthorizationFilter? 
{quote}

Nicholas, thanks for taking the time to review the patch. I have uploaded a 
revised patch that includes javadoc for the class.

 Support Kerberos authorization in HDFSProxy
 ---

 Key: HDFS-1009
 URL: https://issues.apache.org/jira/browse/HDFS-1009
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: contrib/hdfsproxy
Affects Versions: 0.22.0
Reporter: Srikanth Sundarrajan
Assignee: Srikanth Sundarrajan
 Fix For: 0.22.0

 Attachments: HDFS-1009.patch, HDFS-1009.patch


 We should add a filter to support Kerberos authorization in HDFSProxy.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-482) change HsftpFileSystem's ssl.client.do.not.authenticate.server configuration setting to ssl-client.xml

2010-04-06 Thread Srikanth Sundarrajan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854168#action_12854168
 ] 

Srikanth Sundarrajan commented on HDFS-482:
---

 [exec] -1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] -1 tests included.  The patch doesn't appear to include any new or modified tests.
 [exec] Please justify why no new tests are needed for this patch.
 [exec] Also please list what manual steps were performed to verify this patch.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the total number of release audit warnings.

Tests are not included for this patch. The patch is fairly simple: it reads 
the SSL configuration from ssl-client.xml instead of the conf object.
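For reference, the setting named in the issue title would then live in ssl-client.xml rather than the generic HDFS configuration files. A hypothetical fragment (the property name comes from the issue title; the value is shown for illustration only):

```xml
<!-- ssl-client.xml: hypothetical placement of the setting; the value
     shown here is illustrative, not a recommended default. -->
<configuration>
  <property>
    <name>ssl.client.do.not.authenticate.server</name>
    <value>true</value>
  </property>
</configuration>
```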


 change HsftpFileSystem's ssl.client.do.not.authenticate.server configuration 
 setting to ssl-client.xml  
 

 Key: HDFS-482
 URL: https://issues.apache.org/jira/browse/HDFS-482
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/hdfsproxy
Affects Versions: 0.22.0
 Environment: Currently this config setting can only be set via HDFS's 
  configuration files; it needs to move to ssl-client.xml.
Reporter: zhiyong zhang
Assignee: Srikanth Sundarrajan
 Fix For: 0.22.0

 Attachments: HDFS-482.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1009) Support Kerberos authorization in HDFSProxy

2010-04-06 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-1009:
-

  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Srikanth!

 Support Kerberos authorization in HDFSProxy
 ---

 Key: HDFS-1009
 URL: https://issues.apache.org/jira/browse/HDFS-1009
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: contrib/hdfsproxy
Affects Versions: 0.22.0
Reporter: Srikanth Sundarrajan
Assignee: Srikanth Sundarrajan
 Fix For: 0.22.0

 Attachments: HDFS-1009.patch, HDFS-1009.patch


 We should add a filter to support Kerberos authorization in HDFSProxy.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1009) Support Kerberos authorization in HDFSProxy

2010-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854180#action_12854180
 ] 

Hadoop QA commented on HDFS-1009:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12440938/HDFS-1009.patch
  against trunk revision 931256.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of release audit warnings.

-1 core tests.  The patch failed core unit tests.

-1 contrib tests.  The patch failed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/303/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/303/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/303/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/303/console

This message is automatically generated.

 Support Kerberos authorization in HDFSProxy
 ---

 Key: HDFS-1009
 URL: https://issues.apache.org/jira/browse/HDFS-1009
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: contrib/hdfsproxy
Affects Versions: 0.22.0
Reporter: Srikanth Sundarrajan
Assignee: Srikanth Sundarrajan
 Fix For: 0.22.0

 Attachments: HDFS-1009.patch, HDFS-1009.patch


 We should add a filter to support Kerberos authorization in HDFSProxy.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-482) change HsftpFileSystem's ssl.client.do.not.authenticate.server configuration setting to ssl-client.xml

2010-04-06 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-482:


  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Srikanth!

 change HsftpFileSystem's ssl.client.do.not.authenticate.server configuration 
 setting to ssl-client.xml  
 

 Key: HDFS-482
 URL: https://issues.apache.org/jira/browse/HDFS-482
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/hdfsproxy
Affects Versions: 0.22.0
 Environment: Currently this config setting can only be set via HDFS's 
  configuration files; it needs to move to ssl-client.xml.
Reporter: zhiyong zhang
Assignee: Srikanth Sundarrajan
 Fix For: 0.22.0

 Attachments: HDFS-482.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1072) AlreadyBeingCreatedException with HDFS_NameNode as the lease holder

2010-04-06 Thread Erik Steffl (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854185#action_12854185
 ] 

Erik Steffl commented on HDFS-1072:
---

Further investigation revealed that the following sequence leads to 
AlreadyBeingCreatedException:

  - LEASE_LIMIT=500; cluster.setLeasePeriod(LEASE_LIMIT, LEASE_LIMIT);

  - thread A gets a lease on a file

  - thread B sleeps 2*soft limit

  - thread B tries to get a lease on the file, triggers lease recovery, and gets 
RecoveryInProgressException

  - before that lease recovery ends, the namenode's LeaseManager.java:checkLeases 
finds that the hard limit has also expired, starts a new recovery, and resets the 
timeouts

  - thread B tries to get the lease again; the timeout has not expired (it was 
reset in the previous step), so it gets AlreadyBeingCreatedException

There are two problems in the code that lead to this:

  - the hard limit should not be set to such a low value; it makes it very likely 
that a recovery will not finish before it is taken over by another recovery 
(triggered by the expired hard limit)

  - the namenode should recognize that, even though the limit has not expired, a 
recovery is ongoing, and return RecoveryInProgressException instead of 
AlreadyBeingCreatedException (in FSNamesystem.java:startFileInternal, when it 
decides what to do if the file is under construction)
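The timeout-reset race above can be modeled with a minimal, hypothetical sketch. The Lease class, its field names, and the limits below are illustrative only, not the actual NameNode implementation:

```java
// Hypothetical model of the lease-expiry race: when the hard-limit recovery
// resets the lease timestamp, a retrying client no longer sees an expired
// lease and gets AlreadyBeingCreatedException instead of
// RecoveryInProgressException.
public class LeaseRaceSketch {
    static final long SOFT_LIMIT = 500, HARD_LIMIT = 500; // ms, as in the test

    static class Lease {
        long lastUpdate;
        boolean recoveryInProgress;
        Lease(long now) { lastUpdate = now; }
        void renew(long now) { lastUpdate = now; } // reset when recovery starts
        boolean softExpired(long now) { return now - lastUpdate > SOFT_LIMIT; }
        boolean hardExpired(long now) { return now - lastUpdate > HARD_LIMIT; }
    }

    public static void main(String[] args) {
        long t = 0;
        Lease lease = new Lease(t);

        // Thread B wakes after 2 * soft limit and triggers recovery.
        t += 2 * SOFT_LIMIT;
        assert lease.softExpired(t);
        lease.recoveryInProgress = true;   // B sees RecoveryInProgressException

        // checkLeases sees the hard limit expired too, starts a second
        // recovery, and resets the timestamp.
        assert lease.hardExpired(t);
        lease.renew(t);

        // Thread B retries immediately: the lease no longer looks expired.
        System.out.println("expired after reset? " + lease.softExpired(t)); // prints false
    }
}
```

The sketch only reproduces the bookkeeping; the real race also involves the datanode-side block recovery that the reset silently abandons.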

 AlreadyBeingCreatedException with HDFS_NameNode as the lease holder
 ---

 Key: HDFS-1072
 URL: https://issues.apache.org/jira/browse/HDFS-1072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client, name-node
Affects Versions: 0.21.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Erik Steffl
 Fix For: 0.21.0


 TestReadWhileWriting may fail by AlreadyBeingCreatedException with 
 HDFS_NameNode as the lease holder, which indicates that lease recovery is in 
 an inconsistent state.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-481) Bug Fixes + HdfsProxy to use proxy user to impersonate the real user

2010-04-06 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-481:


Fix Version/s: 0.22.0

 Bug Fixes + HdfsProxy to use proxy user to impersonate the real user
 

 Key: HDFS-481
 URL: https://issues.apache.org/jira/browse/HDFS-481
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/hdfsproxy
Affects Versions: 0.21.0
Reporter: zhiyong zhang
Assignee: Srikanth Sundarrajan
 Fix For: 0.22.0

 Attachments: HDFS-481-bp-y20.patch, HDFS-481-bp-y20s.patch, 
 HDFS-481.out, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, 
 HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch


 Bugs:
 1. hadoop-version is not recognized if the ant command is run from src/contrib/ 
 or from src/contrib/hdfsproxy.
 If the ant command is run from $HADOOP_HDFS_HOME, hadoop-version will be passed 
 to contrib's build through subant. But if it is run from src/contrib or 
 src/contrib/hdfsproxy, hadoop-version will not be recognized. 
 2. LdapIpDirFilter.java is not thread safe. userName, Group & Paths are 
 per-request and can't be class members.
 3. Addressed the following StackOverflowError: 
 ERROR [org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/].[proxyForward]] 
 Servlet.service() for servlet proxyForward threw exception
 java.lang.StackOverflowError
 at 
 org.apache.catalina.core.ApplicationHttpRequest.getAttribute(ApplicationHttpRequest.java:229)
  This occurs when the target war (/target.war) does not exist: the 
 forwarding war forwards to its parent context path /, which defines the 
 forwarding war itself, causing an infinite loop. Added 
 "HDFS Proxy Forward".equals(dstContext.getServletContextName()) to the if 
 logic to break the loop.
 4. Kerberos credentials of the remote user aren't available. HdfsProxy needs to 
 act on behalf of the real user to service the requests.
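The loop-breaking check in item 3 can be sketched roughly as follows. The class and method names here are hypothetical; only the string comparison against the forwarding war's context name mirrors the description above:

```java
// Hypothetical sketch of the forwarding guard: forward only when the
// resolved destination context is a real target war, not the forwarding
// war itself (which is what "/" resolves to when the target war is missing,
// previously recursing until StackOverflowError).
public class ForwardGuardSketch {
    // Assumed display name of the forwarding war's servlet context.
    static final String FORWARD_WAR_NAME = "HDFS Proxy Forward";

    static boolean shouldForward(String dstContextName) {
        return dstContextName != null
                && !FORWARD_WAR_NAME.equals(dstContextName);
    }

    public static void main(String[] args) {
        // Target war missing: "/" resolves back to the forwarding war.
        System.out.println(shouldForward("HDFS Proxy Forward")); // prints false
        System.out.println(shouldForward("target"));             // prints true
    }
}
```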

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


