[jira] [Commented] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs
[ https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146783#comment-13146783 ] Hadoop QA commented on HDFS-2539: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12503014/h2539_2008.patch against trunk revision . +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 14 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests: org.apache.hadoop.hdfs.TestFileAppend2 org.apache.hadoop.hdfs.TestBalancerBandwidth +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/1545//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1545//console This message is automatically generated. > Support doAs and GETHOMEDIRECTORY in webhdfs > > > Key: HDFS-2539 > URL: https://issues.apache.org/jira/browse/HDFS-2539 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h2539_2008.patch, h2539_2008_0.20s.patch, > h2539_2008_0.20s.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs
[ https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146770#comment-13146770 ] Hadoop QA commented on HDFS-2539: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12503015/h2539_2008_0.20s.patch against trunk revision . +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 14 new or modified tests. -1 patch. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1546//console This message is automatically generated. > Support doAs and GETHOMEDIRECTORY in webhdfs > > > Key: HDFS-2539 > URL: https://issues.apache.org/jira/browse/HDFS-2539 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h2539_2008.patch, h2539_2008_0.20s.patch, > h2539_2008_0.20s.patch > >
[jira] [Updated] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs
[ https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE updated HDFS-2539: - Attachment: h2539_2008_0.20s.patch h2539_2008_0.20s.patch: synced with trunk > Support doAs and GETHOMEDIRECTORY in webhdfs > > > Key: HDFS-2539 > URL: https://issues.apache.org/jira/browse/HDFS-2539 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h2539_2008.patch, h2539_2008_0.20s.patch, > h2539_2008_0.20s.patch > >
[jira] [Updated] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs
[ https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE updated HDFS-2539: - Status: Patch Available (was: Open) > Support doAs and GETHOMEDIRECTORY in webhdfs > > > Key: HDFS-2539 > URL: https://issues.apache.org/jira/browse/HDFS-2539 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h2539_2008.patch, h2539_2008_0.20s.patch > >
[jira] [Updated] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs
[ https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE updated HDFS-2539: - Attachment: h2539_2008.patch h2539_2008_0.20s.patch h2539_2008_0.20s.patch h2539_2008.patch added doAs and GETHOMEDIRECTORY. > Support doAs and GETHOMEDIRECTORY in webhdfs > > > Key: HDFS-2539 > URL: https://issues.apache.org/jira/browse/HDFS-2539 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h2539_2008.patch, h2539_2008_0.20s.patch > >
[jira] [Updated] (HDFS-1257) Race condition on FSNamesystem#recentInvalidateSets introduced by HADOOP-5124
[ https://issues.apache.org/jira/browse/HDFS-1257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HDFS-1257: --- Target Version/s: 0.20.205.1, 0.23.0 Fix Version/s: 0.20.205.1 Committed to 205.1 > Race condition on FSNamesystem#recentInvalidateSets introduced by HADOOP-5124 > - > > Key: HDFS-1257 > URL: https://issues.apache.org/jira/browse/HDFS-1257 > Project: Hadoop HDFS > Issue Type: Bug > Components: name-node >Affects Versions: 0.23.0 >Reporter: Ramkumar Vadali >Assignee: Eric Payne > Fix For: 0.20.205.1, 0.23.0 > > Attachments: HDFS-1257-branch-0.20-security.patch, > HDFS-1257.1.20110810.patch, HDFS-1257.2.20110812.patch, > HDFS-1257.3.20110815.patch, HDFS-1257.4.20110816.patch, > HDFS-1257.5.20110817.patch, HDFS-1257.patch > > > HADOOP-5124 provided some improvements to FSNamesystem#recentInvalidateSets. > But it introduced unprotected access to the data structure > recentInvalidateSets. Specifically, FSNamesystem.computeInvalidateWork > accesses recentInvalidateSets without read-lock protection. If there is > concurrent activity (like reducing replication on a file) that adds to > recentInvalidateSets, the name-node crashes with a > ConcurrentModificationException.
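The issue description above pinpoints the bug: computeInvalidateWork iterated recentInvalidateSets without holding a read lock while writers mutated it, so the iterator failed fast with a ConcurrentModificationException. A minimal sketch of the read/write-lock pattern such a fix applies; the class, field, and method names below are illustrative stand-ins, not the actual FSNamesystem code:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class InvalidateSetsDemo {
    // illustrative stand-in for FSNamesystem#recentInvalidateSets
    private final Map<String, Set<String>> recentInvalidateSets = new TreeMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // writer path, e.g. triggered by reducing replication on a file
    void addInvalidate(String datanode, String block) {
        lock.writeLock().lock();
        try {
            recentInvalidateSets
                .computeIfAbsent(datanode, k -> new HashSet<>())
                .add(block);
        } finally {
            lock.writeLock().unlock();
        }
    }

    // reader path; iterating WITHOUT the read lock while a writer mutates the
    // map is what produced the ConcurrentModificationException in this issue
    int computeInvalidateWork() {
        lock.readLock().lock();
        try {
            int count = 0;
            for (Set<String> blocks : recentInvalidateSets.values()) {
                count += blocks.size();
            }
            return count;
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        InvalidateSetsDemo demo = new InvalidateSetsDemo();
        demo.addInvalidate("dn1", "blk_1");
        demo.addInvalidate("dn1", "blk_2");
        demo.addInvalidate("dn2", "blk_3");
        System.out.println(demo.computeInvalidateWork()); // prints 3
    }
}
```

In the real namenode the protection comes from the FSNamesystem lock rather than a dedicated lock object, but the invariant is the same: iteration and mutation of the shared map must be serialized.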
[jira] [Updated] (HDFS-2246) Shortcut a local client reads to a Datanodes files directly
[ https://issues.apache.org/jira/browse/HDFS-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HDFS-2246: --- Attachment: HDFS-2246-branch-0.20-security-205.patch Patch for the 205 branch. > Shortcut a local client reads to a Datanodes files directly > --- > > Key: HDFS-2246 > URL: https://issues.apache.org/jira/browse/HDFS-2246 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Sanjay Radia > Fix For: 0.20.205.1 > > Attachments: 0001-HDFS-347.-Local-reads.patch, > HDFS-2246-branch-0.20-security-205.patch, > HDFS-2246-branch-0.20-security-205.patch, > HDFS-2246-branch-0.20-security-205.patch, > HDFS-2246-branch-0.20-security.3.patch, > HDFS-2246-branch-0.20-security.no-softref.patch, > HDFS-2246-branch-0.20-security.patch, HDFS-2246.20s.1.patch, > HDFS-2246.20s.2.txt, HDFS-2246.20s.3.txt, HDFS-2246.20s.4.txt, > HDFS-2246.20s.patch, localReadShortcut20-security.2patch > >
[jira] [Updated] (HDFS-2246) Shortcut a local client reads to a Datanodes files directly
[ https://issues.apache.org/jira/browse/HDFS-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HDFS-2246: --- Attachment: HDFS-2246-branch-0.20-security.no-softref.patch Uploaded another patch with the soft reference removed. The rest is identical to the previous patch. > Shortcut a local client reads to a Datanodes files directly > --- > > Key: HDFS-2246 > URL: https://issues.apache.org/jira/browse/HDFS-2246 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Sanjay Radia > Fix For: 0.20.205.1 > > Attachments: 0001-HDFS-347.-Local-reads.patch, > HDFS-2246-branch-0.20-security-205.patch, > HDFS-2246-branch-0.20-security-205.patch, > HDFS-2246-branch-0.20-security.3.patch, > HDFS-2246-branch-0.20-security.no-softref.patch, > HDFS-2246-branch-0.20-security.patch, HDFS-2246.20s.1.patch, > HDFS-2246.20s.2.txt, HDFS-2246.20s.3.txt, HDFS-2246.20s.4.txt, > HDFS-2246.20s.patch, localReadShortcut20-security.2patch > >
[jira] [Updated] (HDFS-2246) Shortcut a local client reads to a Datanodes files directly
[ https://issues.apache.org/jira/browse/HDFS-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HDFS-2246: --- Attachment: HDFS-2246-branch-0.20-security.3.patch Updated patch. Includes a fix to support multiple local datanodes. Addressed most of Nicholas's comments, except the removal of the soft reference. > Shortcut a local client reads to a Datanodes files directly > --- > > Key: HDFS-2246 > URL: https://issues.apache.org/jira/browse/HDFS-2246 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Sanjay Radia > Fix For: 0.20.205.1 > > Attachments: 0001-HDFS-347.-Local-reads.patch, > HDFS-2246-branch-0.20-security-205.patch, > HDFS-2246-branch-0.20-security-205.patch, > HDFS-2246-branch-0.20-security.3.patch, HDFS-2246-branch-0.20-security.patch, > HDFS-2246.20s.1.patch, HDFS-2246.20s.2.txt, HDFS-2246.20s.3.txt, > HDFS-2246.20s.4.txt, HDFS-2246.20s.patch, localReadShortcut20-security.2patch > >
[jira] [Commented] (HDFS-2316) webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP
[ https://issues.apache.org/jira/browse/HDFS-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146646#comment-13146646 ] Tsz Wo (Nicholas), SZE commented on HDFS-2316: -- Regarding #6, suppose webhdfs and hoop share the same FileSystem scheme. I think "http" as a FileSystem scheme is not an option. Otherwise, we cannot easily tell whether "http://host:port/path/to/file" is an http URL or a FileSystem URI. > webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP > -- > > Key: HDFS-2316 > URL: https://issues.apache.org/jira/browse/HDFS-2316 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: WebHdfsAPI20111020.pdf, WebHdfsAPI2003.pdf > > > We currently have hftp for accessing HDFS over HTTP. However, hftp is a > read-only FileSystem and does not provide "write" access. > In HDFS-2284, we propose to have webhdfs for providing a complete FileSystem > implementation for accessing HDFS over HTTP. This is the umbrella JIRA for > the tasks.
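Nicholas's point about scheme ambiguity can be illustrated with plain java.net.URI: a dedicated scheme lets a client dispatch on URI.getScheme(), whereas reusing "http" would make a FileSystem URI indistinguishable from an ordinary web URL. The hostnames and paths below are made up for illustration:

```java
import java.net.URI;

public class SchemeDemo {
    public static void main(String[] args) {
        // A dedicated "webhdfs" scheme makes the intent unambiguous.
        URI fsUri = URI.create("webhdfs://namenode.example.com:50070/user/alice/data.txt");
        // The same resource reached as a plain web URL: scheme is just "http",
        // so nothing distinguishes it from any other web page.
        URI webUri = URI.create("http://namenode.example.com:50070/webhdfs/v1/user/alice/data.txt");
        System.out.println(fsUri.getScheme());  // webhdfs
        System.out.println(webUri.getScheme()); // http
    }
}
```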
[jira] [Commented] (HDFS-1257) Race condition on FSNamesystem#recentInvalidateSets introduced by HADOOP-5124
[ https://issues.apache.org/jira/browse/HDFS-1257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146645#comment-13146645 ] Tsz Wo (Nicholas), SZE commented on HDFS-1257: -- The only unsynchronized access to recentInvalidateSets is InvalidateQueueProcessingStats.postCheckIsLastCycle(..). It only calls isEmpty(), so it is not a problem. After looking at QueueProcessingStatistics a little bit, I suspect that there is a bug in counting cycles. The potential bug is not related to the patch. +1 the 0.20s patch looks good. > Race condition on FSNamesystem#recentInvalidateSets introduced by HADOOP-5124 > - > > Key: HDFS-1257 > URL: https://issues.apache.org/jira/browse/HDFS-1257 > Project: Hadoop HDFS > Issue Type: Bug > Components: name-node >Affects Versions: 0.23.0 >Reporter: Ramkumar Vadali >Assignee: Eric Payne > Fix For: 0.23.0 > > Attachments: HDFS-1257-branch-0.20-security.patch, > HDFS-1257.1.20110810.patch, HDFS-1257.2.20110812.patch, > HDFS-1257.3.20110815.patch, HDFS-1257.4.20110816.patch, > HDFS-1257.5.20110817.patch, HDFS-1257.patch > > > HADOOP-5124 provided some improvements to FSNamesystem#recentInvalidateSets. > But it introduced unprotected access to the data structure > recentInvalidateSets. Specifically, FSNamesystem.computeInvalidateWork > accesses recentInvalidateSets without read-lock protection. If there is > concurrent activity (like reducing replication on a file) that adds to > recentInvalidateSets, the name-node crashes with a > ConcurrentModificationException.
[jira] [Updated] (HDFS-2417) Warnings about attempt to override final parameter while getting delegation token
[ https://issues.apache.org/jira/browse/HDFS-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash updated HDFS-2417: --- Resolution: Duplicate Status: Resolved (was: Patch Available) Attached the patch for branch-0.20-security to HADOOP-7664 > Warnings about attempt to override final parameter while getting delegation > token > - > > Key: HDFS-2417 > URL: https://issues.apache.org/jira/browse/HDFS-2417 > Project: Hadoop HDFS > Issue Type: Bug > Components: name-node >Affects Versions: 0.20.205.0 >Reporter: Rajit Saha >Assignee: Ravi Prakash > Attachments: HDFS-2417.patch > > > Whenever I run any MapReduce job and it tries to acquire a delegation token > from the NN, the JT log shows the following warnings about "a attempt to > override final parameter:" > The log snippet from the JT log: > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapred.job.reuse.jvm.num.tasks; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapred.system.dir; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: hadoop.job.history.user.location; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapred.local.dir; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapred.job.tracker.http.address; Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: dfs.data.dir; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: dfs.http.address; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapreduce.admin.map.child.java.opts; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapreduce.history.server.http.address; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapreduce.history.server.embedded; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapreduce.jobtracker.split.metainfo.maxsize; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapreduce.admin.reduce.child.java.opts; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: hadoop.tmp.dir; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapred.jobtracker.maxtasks.per.job; Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapred.job.tracker; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: dfs.name.dir; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapred.temp.dir; Ignoring. > 2011-10-07 20:29:19,103 INFO > org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal: > registering token for renewal for service > =:50470 and jobID > = job_201110072015_0005 > 2011-10-07 20:29:19,103 INFO > org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal: > registering token for > renewal for service =:8020 and jobID = job_20111007201
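These warnings are emitted when a job-submitted configuration file tries to change a property that the cluster's own configuration has marked final. In Hadoop's *-site.xml files that marking looks like the following; the property name and value are shown purely for illustration:

```xml
<property>
  <name>mapred.system.dir</name>
  <value>/mapred/system</value>
  <!-- 'final' forbids client/job configurations from overriding this value;
       an attempted override produces the warning above and is ignored -->
  <final>true</final>
</property>
```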
[jira] [Commented] (HDFS-2417) Warnings about attempt to override final parameter while getting delegation token
[ https://issues.apache.org/jira/browse/HDFS-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146582#comment-13146582 ] Tsz Wo (Nicholas), SZE commented on HDFS-2417: -- Is the patch the same as the one in HADOOP-7664 except that it is for 0.20? If yes, we should post the patch in HADOOP-7664. > Warnings about attempt to override final parameter while getting delegation > token > - > > Key: HDFS-2417 > URL: https://issues.apache.org/jira/browse/HDFS-2417 > Project: Hadoop HDFS > Issue Type: Bug > Components: name-node >Affects Versions: 0.20.205.0 >Reporter: Rajit Saha >Assignee: Ravi Prakash > Attachments: HDFS-2417.patch > > > Whenever I run any MapReduce job and it tries to acquire a delegation token > from the NN, the JT log shows the following warnings about "a attempt to > override final parameter:" > The log snippet from the JT log: > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapred.job.reuse.jvm.num.tasks; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapred.system.dir; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: hadoop.job.history.user.location; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapred.local.dir; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapred.job.tracker.http.address; Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: dfs.data.dir; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: dfs.http.address; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapreduce.admin.map.child.java.opts; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapreduce.history.server.http.address; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapreduce.history.server.embedded; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapreduce.jobtracker.split.metainfo.maxsize; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapreduce.admin.reduce.child.java.opts; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: hadoop.tmp.dir; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapred.jobtracker.maxtasks.per.job; Ignoring.
> 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapred.job.tracker; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: dfs.name.dir; Ignoring. > 2011-10-07 20:29:19,096 WARN org.apache.hadoop.conf.Configuration: > /tmp/mapred-local/jobTracker/job_201110072015_0005.xml:a attempt to override > final parameter: mapred.temp.dir; Ignoring. > 2011-10-07 20:29:19,103 INFO > org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal: > registering token for renewal for service > =:50470 and jobID > = job_201110072015_0005 > 2011-10-07 20:29:19,103 INFO > org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal: > registeri
[jira] [Commented] (HDFS-2246) Shortcut a local client reads to a Datanodes files directly
[ https://issues.apache.org/jira/browse/HDFS-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146549#comment-13146549 ] Jitendra Nath Pandey commented on HDFS-2246: This jira was intended for 205 (as pointed out by Dhruba in an earlier comment) but couldn't be completed within the 205 deadline; therefore we are now shooting for 205.1. > Shortcut a local client reads to a Datanodes files directly > --- > > Key: HDFS-2246 > URL: https://issues.apache.org/jira/browse/HDFS-2246 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Sanjay Radia > Fix For: 0.20.205.1 > > Attachments: 0001-HDFS-347.-Local-reads.patch, > HDFS-2246-branch-0.20-security-205.patch, > HDFS-2246-branch-0.20-security-205.patch, > HDFS-2246-branch-0.20-security.patch, HDFS-2246.20s.1.patch, > HDFS-2246.20s.2.txt, HDFS-2246.20s.3.txt, HDFS-2246.20s.4.txt, > HDFS-2246.20s.patch, localReadShortcut20-security.2patch > >
[jira] [Commented] (HDFS-2545) Webhdfs: Support multiple namenodes in federation
[ https://issues.apache.org/jira/browse/HDFS-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146542#comment-13146542 ] Aaron T. Myers commented on HDFS-2545: -- Great. Thanks for the clarification. > Webhdfs: Support multiple namenodes in federation > - > > Key: HDFS-2545 > URL: https://issues.apache.org/jira/browse/HDFS-2545 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > > DatanodeWebHdfsMethods only talks to the default namenode. It won't work if > there are multiple namenodes in federation.
[jira] [Updated] (HDFS-2540) Change WebHdfsFileSystem to two-step create/append
[ https://issues.apache.org/jira/browse/HDFS-2540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE updated HDFS-2540: - Resolution: Fixed Fix Version/s: 0.23.1 0.24.0 0.23.0 0.20.206.0 0.20.205.1 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks Jitendra for the review. I have committed this. > Change WebHdfsFileSystem to two-step create/append > -- > > Key: HDFS-2540 > URL: https://issues.apache.org/jira/browse/HDFS-2540 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Fix For: 0.20.205.1, 0.20.206.0, 0.23.0, 0.24.0, 0.23.1 > > Attachments: h2540_2007.patch, h2540_2007_0.20s.patch, > h2540_2008.patch, h2540_2008_0.20s.patch > >
[jira] [Updated] (HDFS-2528) webhdfs rest call to a secure dn fails when a token is sent
[ https://issues.apache.org/jira/browse/HDFS-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE updated HDFS-2528: - Fix Version/s: 0.23.0 Merged to 0.23.0. > webhdfs rest call to a secure dn fails when a token is sent > --- > > Key: HDFS-2528 > URL: https://issues.apache.org/jira/browse/HDFS-2528 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 0.20.205.0 >Reporter: Arpit Gupta >Assignee: Tsz Wo (Nicholas), SZE > Fix For: 0.20.205.1, 0.20.206.0, 0.23.0, 0.24.0, 0.23.1 > > Attachments: h2528_2001.patch, h2528_2001_0.20s.patch, > h2528_2001b.patch, h2528_2001b_0.20s.patch, h2528_2002.patch, > h2528_2002_0.20s.patch, h2528_2003.patch, h2528_2003_0.20s.patch, > h2528_2003_0.20s.patch > > > curl -L -u : --negotiate -i > "http://NN:50070/webhdfs/v1/tmp/webhdfs_data/file_small_data.txt?op=OPEN" > the following exception is thrown by the datanode when the redirect happens. > {"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Call > to failed on local exception: java.io.IOException: > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)]"}} > Interestingly when using ./bin/hadoop with a webhdfs path we are able to cat > or tail a file successfully.
[jira] [Updated] (HDFS-2527) Remove the use of Range header from webhdfs
[ https://issues.apache.org/jira/browse/HDFS-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE updated HDFS-2527: - Fix Version/s: 0.23.0 Merged to 0.23.0. > Remove the use of Range header from webhdfs > --- > > Key: HDFS-2527 > URL: https://issues.apache.org/jira/browse/HDFS-2527 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Fix For: 0.20.205.1, 0.20.206.0, 0.23.0, 0.24.0, 0.23.1 > > Attachments: h2527_2001b_0.20s.patch, h2527_2002.patch, > h2527_2002_0.20s.patch > >
[jira] [Updated] (HDFS-2416) distcp with a webhdfs uri on a secure cluster fails
[ https://issues.apache.org/jira/browse/HDFS-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE updated HDFS-2416: - Merged to 0.23.0. > distcp with a webhdfs uri on a secure cluster fails > --- > > Key: HDFS-2416 > URL: https://issues.apache.org/jira/browse/HDFS-2416 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 0.20.205.0 >Reporter: Arpit Gupta >Assignee: Jitendra Nath Pandey > Fix For: 0.20.205.1, 0.20.206.0, 0.23.0, 0.24.0 > > Attachments: HDFS-2416-branch-0.20-security.6.patch, > HDFS-2416-branch-0.20-security.7.patch, > HDFS-2416-branch-0.20-security.8.patch, HDFS-2416-branch-0.20-security.patch, > HDFS-2416-trunk.patch, HDFS-2416-trunk.patch, > HDFS-2419-branch-0.20-security.patch, HDFS-2419-branch-0.20-security.patch > >
[jira] [Commented] (HDFS-2540) Change WebHdfsFileSystem to two-step create/append
[ https://issues.apache.org/jira/browse/HDFS-2540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146513#comment-13146513 ] Hudson commented on HDFS-2540: -- Integrated in Hadoop-Mapreduce-trunk-Commit #1280 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1280/]) HDFS-2540. Webhdfs: change "Expect: 100-continue" to two-step write; change "HdfsFileStatus" and "localName" respectively to "FileStatus" and "pathSuffix" in JSON response. szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199396 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java > Change WebHdfsFileSystem to two-step create/append > -- > > Key: HDFS-2540 > URL: https://issues.apache.org/jira/browse/HDFS-2540 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h2540_2007.patch, h2540_2007_0.20s.patch, > h2540_2008.patch, h2540_2008_0.20s.patch > >
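The two-step write pattern the commit message describes — a first request carrying no data that the namenode answers with a 307 redirect, then a second request that sends the bytes to the datanode named in the Location header — can be sketched with a toy in-process HTTP server. The hosts, paths, and payload below are stand-ins for illustration, not a real cluster or the actual WebHdfsFileSystem code:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class TwoStepWriteDemo {

    static String run() throws Exception {
        StringBuilder received = new StringBuilder();

        // Toy "namenode": answers the first PUT with a 307 redirect, no data read.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        int port = server.getAddress().getPort();
        server.createContext("/webhdfs/v1/tmp/f", exchange -> {
            exchange.getResponseHeaders().add("Location",
                    "http://127.0.0.1:" + port + "/dn/tmp/f");
            exchange.sendResponseHeaders(307, -1);
            exchange.close();
        });
        // Toy "datanode": accepts the file content on the second PUT.
        server.createContext("/dn/tmp/f", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            synchronized (received) { received.append(new String(body)); }
            exchange.sendResponseHeaders(201, -1);
            exchange.close();
        });
        server.setExecutor(null);
        server.start();
        try {
            // Step 1: send the request WITHOUT data; do not auto-follow the redirect.
            HttpURLConnection c1 = (HttpURLConnection) new URL(
                    "http://127.0.0.1:" + port + "/webhdfs/v1/tmp/f?op=CREATE")
                    .openConnection();
            c1.setRequestMethod("PUT");
            c1.setInstanceFollowRedirects(false);
            String dnUrl = c1.getHeaderField("Location");

            // Step 2: send the bytes to the location named by the redirect.
            HttpURLConnection c2 = (HttpURLConnection) new URL(dnUrl).openConnection();
            c2.setRequestMethod("PUT");
            c2.setDoOutput(true);
            try (OutputStream out = c2.getOutputStream()) {
                out.write("hello".getBytes());
            }
            int code = c2.getResponseCode();
            synchronized (received) { return code + " " + received; }
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run()); // 201 hello
    }
}
```

The point of the two steps is that no file data is committed to the wire until the client knows which datanode will receive it, which avoids relying on "Expect: 100-continue" handling in HTTP clients and proxies.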
[jira] [Updated] (HDFS-2545) Webhdfs: Support multiple namenodes in federation
[ https://issues.apache.org/jira/browse/HDFS-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE updated HDFS-2545: - Description: DatanodeWebHdfsMethods only talks to the default namenode. It won't work if there are multiple namenodes in federation. Summary: Webhdfs: Support multiple namenodes in federation (was: Support multiple namenodes in webhdfs) > Webhdfs: Support multiple namenodes in federation > - > > Key: HDFS-2545 > URL: https://issues.apache.org/jira/browse/HDFS-2545 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > > DatanodeWebHdfsMethods only talks to the default namenode. It won't work if > there are multiple namenodes in federation. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
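The description above says DatanodeWebHdfsMethods only talks to the default namenode, which breaks under federation, where one datanode serves blocks for several namenodes. A redirected request therefore needs to identify which namenode the datanode should call back. The sketch below is hypothetical: the query-parameter name and resolver shape are illustrative, not the actual Hadoop fix.

```python
# Hypothetical sketch of namenode selection under federation. A request that
# carries no namenode parameter falls back to the pre-federation behaviour
# (default namenode only); otherwise the named namenode must be one the
# datanode knows about. Names here are illustrative, not Hadoop's API.

DEFAULT_NN = "nn-default:8020"

def resolve_namenode(query_params, known_namenodes):
    """Pick the namenode for a redirected WebHDFS request."""
    target = query_params.get("namenoderpcaddress")
    if target is None:
        return DEFAULT_NN  # old behaviour: always the default namenode
    if target not in known_namenodes:
        raise ValueError("unknown namenode: " + target)
    return target
```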
[jira] [Commented] (HDFS-2545) Support multiple namenodes in webhdfs
[ https://issues.apache.org/jira/browse/HDFS-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146509#comment-13146509 ] Tsz Wo (Nicholas), SZE commented on HDFS-2545: -- Hi Aaron, this refers to federated NNs. Separate clusters are already supported. HA NNs should be supported automatically, since HA should take care of client failover. > Support multiple namenodes in webhdfs > - > > Key: HDFS-2545 > URL: https://issues.apache.org/jira/browse/HDFS-2545 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2540) Change WebHdfsFileSystem to two-step create/append
[ https://issues.apache.org/jira/browse/HDFS-2540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146506#comment-13146506 ] Hudson commented on HDFS-2540: -- Integrated in Hadoop-Mapreduce-0.23-Commit #168 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/168/]) svn merge -c 1199396 from trunk for HDFS-2540. szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199403 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java > Change WebHdfsFileSystem to two-step create/append > -- > > Key: HDFS-2540 > URL: https://issues.apache.org/jira/browse/HDFS-2540 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h2540_2007.patch, h2540_2007_0.20s.patch, > h2540_2008.patch, h2540_2008_0.20s.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2540) Change WebHdfsFileSystem to two-step create/append
[ https://issues.apache.org/jira/browse/HDFS-2540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146499#comment-13146499 ] Hudson commented on HDFS-2540: -- Integrated in Hadoop-Common-0.23-Commit #157 (See [https://builds.apache.org/job/Hadoop-Common-0.23-Commit/157/]) svn merge -c 1199396 from trunk for HDFS-2540. szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199403 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java > Change WebHdfsFileSystem to two-step create/append > -- > > Key: HDFS-2540 > URL: https://issues.apache.org/jira/browse/HDFS-2540 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h2540_2007.patch, h2540_2007_0.20s.patch, > h2540_2008.patch, h2540_2008_0.20s.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2316) webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP
[ https://issues.apache.org/jira/browse/HDFS-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146497#comment-13146497 ] Arpit Gupta commented on HDFS-2316: --- @Alejandro {quote} Again, I mean in a 'general way'. Having a syntax that is convenient for parsing using a specific library doesn't seem the right approach. {quote} I am not sure why the approach I suggested is not general. The current response we send allows users to create a DOM object from the JSON response. If the root object were not present, the user would have to write specific code for different API calls and add the root object where needed. Thus I think what we have right now allows for a general approach rather than specific solutions for different API calls. The benefit of a response that can be converted to valid XML is that if we want to support an XML response in the future, no schema change is needed between XML and JSON. Also, clients using Java can use the Java XPath libraries to parse the data; I am not sure JSON has something as strong as XPath. Here you can see an example where a response has both JSON and XML forms, a YQL call to get weather info: xml -> http://goo.gl/i2Gii json -> http://goo.gl/osChW . So I believe our JSON response should return a root object. > webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP > -- > > Key: HDFS-2316 > URL: https://issues.apache.org/jira/browse/HDFS-2316 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: WebHdfsAPI20111020.pdf, WebHdfsAPI2003.pdf > > > We currently have hftp for accessing HDFS over HTTP. However, hftp is a > read-only FileSystem and does not provide "write" access. > In HDFS-2284, we propose webhdfs, providing a complete FileSystem > implementation for accessing HDFS over HTTP. This is the umbrella JIRA for > the tasks. 
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
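The argument above is that a JSON response with a single root object maps mechanically onto an XML document, so one generic converter covers every API call instead of per-call code. A minimal sketch of that conversion follows; the payload shape is an example, not the exact WebHDFS schema.

```python
import json
from xml.etree import ElementTree as ET

# Generic JSON-to-XML conversion, possible only because the response has
# exactly one root object: the root's name becomes the XML root element and
# each field becomes a child element. Flat fields only, for brevity.

def json_to_xml(text):
    obj = json.loads(text)
    if len(obj) != 1:
        raise ValueError("response must have exactly one root object")
    (root_name, fields), = obj.items()
    root = ET.Element(root_name)
    for key, value in fields.items():
        child = ET.SubElement(root, key)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")
```

Without the root object, the converter could not name the XML root element, and each API call would need its own wrapping logic, which is exactly the per-call special-casing the comment argues against.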
[jira] [Commented] (HDFS-2540) Change WebHdfsFileSystem to two-step create/append
[ https://issues.apache.org/jira/browse/HDFS-2540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146493#comment-13146493 ] Hudson commented on HDFS-2540: -- Integrated in Hadoop-Hdfs-trunk-Commit #1332 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1332/]) HDFS-2540. Webhdfs: change "Expect: 100-continue" to two-step write; change "HdfsFileStatus" and "localName" respectively to "FileStatus" and "pathSuffix" in JSON response. szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199396 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java > Change WebHdfsFileSystem to two-step create/append > -- > > Key: HDFS-2540 > URL: https://issues.apache.org/jira/browse/HDFS-2540 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h2540_2007.patch, h2540_2007_0.20s.patch, > h2540_2008.patch, h2540_2008_0.20s.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2540) Change WebHdfsFileSystem to two-step create/append
[ https://issues.apache.org/jira/browse/HDFS-2540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146496#comment-13146496 ] Hudson commented on HDFS-2540: -- Integrated in Hadoop-Common-trunk-Commit #1258 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1258/]) HDFS-2540. Webhdfs: change "Expect: 100-continue" to two-step write; change "HdfsFileStatus" and "localName" respectively to "FileStatus" and "pathSuffix" in JSON response. szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199396 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java > Change WebHdfsFileSystem to two-step create/append > -- > > Key: HDFS-2540 > URL: https://issues.apache.org/jira/browse/HDFS-2540 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h2540_2007.patch, h2540_2007_0.20s.patch, > h2540_2008.patch, h2540_2008_0.20s.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2540) Change WebHdfsFileSystem to two-step create/append
[ https://issues.apache.org/jira/browse/HDFS-2540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146492#comment-13146492 ] Hudson commented on HDFS-2540: -- Integrated in Hadoop-Hdfs-0.23-Commit #156 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/156/]) svn merge -c 1199396 from trunk for HDFS-2540. szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199403 Files : * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java > Change WebHdfsFileSystem to two-step create/append > -- > > Key: HDFS-2540 > URL: https://issues.apache.org/jira/browse/HDFS-2540 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h2540_2007.patch, h2540_2007_0.20s.patch, > h2540_2008.patch, h2540_2008_0.20s.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2540) Change WebHdfsFileSystem to two-step create/append
[ https://issues.apache.org/jira/browse/HDFS-2540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146481#comment-13146481 ] Jitendra Nath Pandey commented on HDFS-2540: +1 for the patch. > Change WebHdfsFileSystem to two-step create/append > -- > > Key: HDFS-2540 > URL: https://issues.apache.org/jira/browse/HDFS-2540 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h2540_2007.patch, h2540_2007_0.20s.patch, > h2540_2008.patch, h2540_2008_0.20s.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2540) Change WebHdfsFileSystem to two-step create/append
[ https://issues.apache.org/jira/browse/HDFS-2540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146388#comment-13146388 ] Hadoop QA commented on HDFS-2540: - -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12502920/h2540_2008.patch against trunk revision . +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests: org.apache.hadoop.hdfs.TestFileAppend2 org.apache.hadoop.hdfs.TestBalancerBandwidth +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/1544//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1544//console This message is automatically generated. > Change WebHdfsFileSystem to two-step create/append > -- > > Key: HDFS-2540 > URL: https://issues.apache.org/jira/browse/HDFS-2540 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h2540_2007.patch, h2540_2007_0.20s.patch, > h2540_2008.patch, h2540_2008_0.20s.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-2540) Change WebHdfsFileSystem to two-step create/append
[ https://issues.apache.org/jira/browse/HDFS-2540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE updated HDFS-2540: - Attachment: h2540_2008.patch h2540_2008_0.20s.patch h2540_2008_0.20s.patch h2540_2008.patch Added valueOf(..) in TemporaryRedirectOp. No new tests added since the existing tests already cover this. > Change WebHdfsFileSystem to two-step create/append > -- > > Key: HDFS-2540 > URL: https://issues.apache.org/jira/browse/HDFS-2540 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h2540_2007.patch, h2540_2007_0.20s.patch, > h2540_2008.patch, h2540_2008_0.20s.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2545) Support multiple namenodes in webhdfs
[ https://issues.apache.org/jira/browse/HDFS-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146305#comment-13146305 ] Aaron T. Myers commented on HDFS-2545: -- Hey Nicholas, do you mean supporting multiple NNs in the sense of separate clusters, federated NNs, or HA NNs? > Support multiple namenodes in webhdfs > - > > Key: HDFS-2545 > URL: https://issues.apache.org/jira/browse/HDFS-2545 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2495) Increase granularity of write operations in ReplicationMonitor thus reducing contention for write lock
[ https://issues.apache.org/jira/browse/HDFS-2495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146301#comment-13146301 ] Hudson commented on HDFS-2495: -- Integrated in Hadoop-Mapreduce-trunk #891 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/891/]) HDFS-2495. Increase granularity of write operations in ReplicationMonitor thus reducing contention for write lock. Contributed by Tomasz Nykiel. hairong : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199024 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java > Increase granularity of write operations in ReplicationMonitor thus reducing > contention for write lock > -- > > Key: HDFS-2495 > URL: https://issues.apache.org/jira/browse/HDFS-2495 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: name-node >Reporter: Tomasz Nykiel >Assignee: Tomasz Nykiel > Fix For: 0.24.0 > > Attachments: replicationMon.patch, replicationMon.patch-1 > > > For processing blocks in ReplicationMonitor > (BlockManager.computeReplicationWork), we first obtain a list of blocks to be > replicated by calling chooseUnderReplicatedBlocks, and then for each block > which was found, we call computeReplicationWorkForBlock. The latter processes > a block in three stages, acquiring the writelock twice per call: > 1. obtaining block related info (livenodes, srcnode, etc.) under lock > 2. choosing target for replication > 3. scheduling replication (under lock) > We would like to change this behaviour and decrease contention for the write > lock, by batching blocks and executing 1,2,3, for sets of blocks, rather than > for each one separately. 
This would decrease the number of writeLock to 2, > from 2*numberofblocks. > Also, the info level logging can be pushed outside the writelock. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
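The description above counts two write-lock acquisitions per block in the old per-block path (stages 1 and 3 run under the lock, stage 2 does not), versus two per batch after the change. The toy model below makes that count concrete; it tracks acquisitions only and contains no real HDFS code.

```python
# Toy model of the ReplicationMonitor locking change: processing blocks one at
# a time acquires the write lock twice per block (info gathering and
# scheduling), while batching acquires it twice for the whole set. Target
# selection (stage 2) happens outside the lock in both variants.

class CountingLock:
    def __init__(self):
        self.acquisitions = 0
    def __enter__(self):
        self.acquisitions += 1
        return self
    def __exit__(self, *exc):
        return False

def replicate_per_block(blocks, lock):
    for _block in blocks:
        with lock:   # stage 1: gather block info under the write lock
            pass
        # stage 2: choose replication targets, lock-free
        with lock:   # stage 3: schedule replication under the write lock
            pass

def replicate_batched(blocks, lock):
    with lock:       # stage 1 for the whole batch
        pass
    # stage 2 for the whole batch, lock-free
    with lock:       # stage 3 for the whole batch
        pass
```

For N blocks the per-block variant acquires the lock 2*N times and the batched variant twice, matching the 2 versus 2*numberOfBlocks figure in the issue description.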
[jira] [Commented] (HDFS-2528) webhdfs rest call to a secure dn fails when a token is sent
[ https://issues.apache.org/jira/browse/HDFS-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146295#comment-13146295 ] Hudson commented on HDFS-2528: -- Integrated in Hadoop-Mapreduce-trunk #891 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/891/]) HDFS-2528. Webhdfs: set delegation kind to WEBHDFS and add a HDFS token when http requests are redirected to datanode. szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1198903 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenRenewer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/resources/DatanodeWebHdfsMethods.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java > webhdfs rest call to a secure dn fails when a token is sent > --- > > Key: HDFS-2528 > URL: https://issues.apache.org/jira/browse/HDFS-2528 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 0.20.205.0 >Reporter: Arpit Gupta >Assignee: Tsz Wo (Nicholas), SZE > Fix For: 0.20.205.1, 0.20.206.0, 0.24.0, 0.23.1 > > Attachments: h2528_2001.patch, h2528_2001_0.20s.patch, > h2528_2001b.patch, h2528_2001b_0.20s.patch, h2528_2002.patch, > h2528_2002_0.20s.patch, 
h2528_2003.patch, h2528_2003_0.20s.patch, > h2528_2003_0.20s.patch > > > curl -L -u : --negotiate -i > "http://NN:50070/webhdfs/v1/tmp/webhdfs_data/file_small_data.txt?op=OPEN"; > the following exception is thrown by the datanode when the redirect happens. > {"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Call > to failed on local exception: java.io.IOException: > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)]"}} > Interestingly when using ./bin/hadoop with a webhdfs path we are able to cat > or tail a file successfully. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
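The failure above surfaces as a RemoteException JSON envelope from the datanode. A client that wants to report such errors sensibly has to unwrap that envelope; here is a small sketch, with the field names ("RemoteException", "exception", "message") taken from the error quoted in this issue and treated as an assumption about the schema.

```python
import json

# Unwrap the WebHDFS RemoteException envelope, e.g.
#   {"RemoteException": {"exception": "IOException",
#                        "javaClassName": "java.io.IOException",
#                        "message": "..."}}
# Returns (exception_name, message), or None if the body is not an error.

def unwrap_remote_exception(body):
    payload = json.loads(body).get("RemoteException")
    if payload is None:
        return None
    return (payload.get("exception", "UnknownException"),
            payload.get("message", ""))
```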
[jira] [Commented] (HDFS-2495) Increase granularity of write operations in ReplicationMonitor thus reducing contention for write lock
[ https://issues.apache.org/jira/browse/HDFS-2495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146283#comment-13146283 ] Hudson commented on HDFS-2495: -- Integrated in Hadoop-Hdfs-trunk #857 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/857/]) HDFS-2495. Increase granularity of write operations in ReplicationMonitor thus reducing contention for write lock. Contributed by Tomasz Nykiel. hairong : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199024 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java > Increase granularity of write operations in ReplicationMonitor thus reducing > contention for write lock > -- > > Key: HDFS-2495 > URL: https://issues.apache.org/jira/browse/HDFS-2495 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: name-node >Reporter: Tomasz Nykiel >Assignee: Tomasz Nykiel > Fix For: 0.24.0 > > Attachments: replicationMon.patch, replicationMon.patch-1 > > > For processing blocks in ReplicationMonitor > (BlockManager.computeReplicationWork), we first obtain a list of blocks to be > replicated by calling chooseUnderReplicatedBlocks, and then for each block > which was found, we call computeReplicationWorkForBlock. The latter processes > a block in three stages, acquiring the writelock twice per call: > 1. obtaining block related info (livenodes, srcnode, etc.) under lock > 2. choosing target for replication > 3. scheduling replication (under lock) > We would like to change this behaviour and decrease contention for the write > lock, by batching blocks and executing 1,2,3, for sets of blocks, rather than > for each one separately. 
This would decrease the number of writeLock to 2, > from 2*numberofblocks. > Also, the info level logging can be pushed outside the writelock. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2528) webhdfs rest call to a secure dn fails when a token is sent
[ https://issues.apache.org/jira/browse/HDFS-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146277#comment-13146277 ] Hudson commented on HDFS-2528: -- Integrated in Hadoop-Hdfs-trunk #857 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/857/]) HDFS-2528. Webhdfs: set delegation kind to WEBHDFS and add a HDFS token when http requests are redirected to datanode. szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1198903 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenRenewer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/resources/DatanodeWebHdfsMethods.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java > webhdfs rest call to a secure dn fails when a token is sent > --- > > Key: HDFS-2528 > URL: https://issues.apache.org/jira/browse/HDFS-2528 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 0.20.205.0 >Reporter: Arpit Gupta >Assignee: Tsz Wo (Nicholas), SZE > Fix For: 0.20.205.1, 0.20.206.0, 0.24.0, 0.23.1 > > Attachments: h2528_2001.patch, h2528_2001_0.20s.patch, > h2528_2001b.patch, h2528_2001b_0.20s.patch, h2528_2002.patch, > h2528_2002_0.20s.patch, h2528_2003.patch, 
h2528_2003_0.20s.patch, > h2528_2003_0.20s.patch > > > curl -L -u : --negotiate -i > "http://NN:50070/webhdfs/v1/tmp/webhdfs_data/file_small_data.txt?op=OPEN"; > the following exception is thrown by the datanode when the redirect happens. > {"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Call > to failed on local exception: java.io.IOException: > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)]"}} > Interestingly when using ./bin/hadoop with a webhdfs path we are able to cat > or tail a file successfully. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2246) Shortcut a local client reads to a Datanodes files directly
[ https://issues.apache.org/jira/browse/HDFS-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146267#comment-13146267 ] Todd Lipcon commented on HDFS-2246: --- I'm curious about the fixversion for this... it would seem 0.20.205.1 (a maintenance dot-release of a maintenance/stable branch) isn't exactly the place for a big new feature. What am I missing? Will try to review this soon, but it's a busy week for many of us at a conference. > Shortcut a local client reads to a Datanodes files directly > --- > > Key: HDFS-2246 > URL: https://issues.apache.org/jira/browse/HDFS-2246 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Sanjay Radia > Fix For: 0.20.205.1 > > Attachments: 0001-HDFS-347.-Local-reads.patch, > HDFS-2246-branch-0.20-security-205.patch, > HDFS-2246-branch-0.20-security-205.patch, > HDFS-2246-branch-0.20-security.patch, HDFS-2246.20s.1.patch, > HDFS-2246.20s.2.txt, HDFS-2246.20s.3.txt, HDFS-2246.20s.4.txt, > HDFS-2246.20s.patch, localReadShortcut20-security.2patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2528) webhdfs rest call to a secure dn fails when a token is sent
[ https://issues.apache.org/jira/browse/HDFS-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146260#comment-13146260 ]

Hudson commented on HDFS-2528:
------------------------------

Integrated in Hadoop-Hdfs-0.23-Build #70 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/70/])
svn merge -c 1198903 from trunk for HDFS-2528.

szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1198905
Files :
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenRenewer.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/resources/DatanodeWebHdfsMethods.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java

> webhdfs rest call to a secure dn fails when a token is sent
> -----------------------------------------------------------
>
>                 Key: HDFS-2528
>                 URL: https://issues.apache.org/jira/browse/HDFS-2528
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>    Affects Versions: 0.20.205.0
>            Reporter: Arpit Gupta
>            Assignee: Tsz Wo (Nicholas), SZE
>             Fix For: 0.20.205.1, 0.20.206.0, 0.24.0, 0.23.1
>
>         Attachments: h2528_2001.patch, h2528_2001_0.20s.patch, h2528_2001b.patch, h2528_2001b_0.20s.patch, h2528_2002.patch, h2528_2002_0.20s.patch, h2528_2003.patch, h2528_2003_0.20s.patch, h2528_2003_0.20s.patch
>
> curl -L -u : --negotiate -i "http://NN:50070/webhdfs/v1/tmp/webhdfs_data/file_small_data.txt?op=OPEN"
>
> The following exception is thrown by the datanode when the redirect happens:
>
> {"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Call to failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]"}}
>
> Interestingly, when using ./bin/hadoop with a webhdfs path we are able to cat or tail a file successfully.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
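The failing flow in the report above is a two-step WebHDFS exchange: the client authenticates to the namenode, obtains a delegation token, and the namenode then redirects the OPEN to a datanode, which must honor the token rather than demand a fresh Kerberos TGT. A minimal URL-construction sketch of that flow follows; the host `nn.example.com`, the `renewer` value, and the token string are placeholders, not values from the issue.

```python
from urllib.parse import urlencode, urlparse, parse_qs

WEBHDFS_PREFIX = "/webhdfs/v1"

def webhdfs_url(host, path, op, port=50070, **params):
    """Build a WebHDFS URL like the op=OPEN call in the bug report."""
    query = urlencode({"op": op, **params})
    return f"http://{host}:{port}{WEBHDFS_PREFIX}{path}?{query}"

# Step 1: ask the namenode for a delegation token. This first call is the
# one that needs SPNEGO/Kerberos credentials (curl --negotiate).
token_url = webhdfs_url("nn.example.com", "/", "GETDELEGATIONTOKEN",
                        renewer="hdfs")

# Step 2: OPEN with the token attached. The namenode redirects to a
# datanode, which should accept the delegation token instead of failing
# with "GSS initiate failed" -- the behavior fixed by HDFS-2528.
open_url = webhdfs_url("nn.example.com",
                       "/tmp/webhdfs_data/file_small_data.txt",
                       "OPEN", delegation="<token-string>")
```

The sketch only shows how the query parameters are assembled; an actual client would also follow the HTTP 307 redirect that the namenode returns for OPEN.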
[jira] [Created] (HDFS-2546) The C HDFS API should work with secure HDFS
The C HDFS API should work with secure HDFS
-------------------------------------------

                 Key: HDFS-2546
                 URL: https://issues.apache.org/jira/browse/HDFS-2546
             Project: Hadoop HDFS
          Issue Type: New Feature
          Components: libhdfs
    Affects Versions: 0.24.0
            Reporter: Harsh J

Right now, libhdfs does not work with Kerberos-enabled Hadoop. If libhdfs is still being supported, it must fully work with Kerberized instances of HDFS.
[jira] [Commented] (HDFS-2528) webhdfs rest call to a secure dn fails when a token is sent
[ https://issues.apache.org/jira/browse/HDFS-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146151#comment-13146151 ]

Hudson commented on HDFS-2528:
------------------------------

Integrated in Hadoop-Mapreduce-0.23-Build #84 (See [https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/84/])
svn merge -c 1198903 from trunk for HDFS-2528.

szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1198905
Files :
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenRenewer.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/resources/DatanodeWebHdfsMethods.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java

> webhdfs rest call to a secure dn fails when a token is sent
> -----------------------------------------------------------
>
>                 Key: HDFS-2528
>                 URL: https://issues.apache.org/jira/browse/HDFS-2528