[ 
https://issues.apache.org/jira/browse/HDFS-8572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15028757#comment-15028757
 ] 

Brahma Reddy Battula commented on HDFS-8572:
--------------------------------------------

Hi [~wheat9],

While committing this patch, did you purposefully remove the following in branch-2?

{code}
-    this.infoServer.addInternalServlet(null, "/streamFile/*", StreamFile.class);
-    this.infoServer.addInternalServlet(null, "/getFileChecksum/*",
-        FileChecksumServlets.GetServlet.class);
{code}

I think distcp will fail when we use hftp, since it expects the streamFile servlet and throws the following error. Am I correct? If you agree, I would like to raise a separate jira and add these back.

{noformat}
Caused by: org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException: java.io.IOException: HTTP_OK expected, received 404
	at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.getInputStream(RetriableFileCopyCommand.java:302)
	at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:247)
	at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToFile(RetriableFileCopyCommand.java:183)
	at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:123)
	at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
	at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
	... 11 more
Caused by: java.io.IOException: HTTP_OK expected, received 404
	at org.apache.hadoop.hdfs.web.HftpFileSystem$RangeHeaderUrlOpener.connect(HftpFileSystem.java:367)
	at org.apache.hadoop.hdfs.web.ByteRangeInputStream.openInputStream(ByteRangeInputStream.java:133)
	at org.apache.hadoop.hdfs.web.ByteRangeInputStream.getInputStream(ByteRangeInputStream.java:115)
	at org.apache.hadoop.hdfs.web.ByteRangeInputStream.<init>(ByteRangeInputStream.java:100)
	at org.apache.hadoop.hdfs.web.HftpFileSystem$RangeHeaderInputStream.<init>(HftpFileSystem.java:376)
	at org.apache.hadoop.hdfs.web.HftpFileSystem$RangeHeaderInputStream.<init>(HftpFileSystem.java:381)
	at org.apache.hadoop.hdfs.web.HftpFileSystem.open(HftpFileSystem.java:397)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
{noformat}
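If we agree to add them back, the fix would essentially revert the hunk quoted above, i.e. re-register the two servlets on the info server. A sketch (untested, names taken from the removed lines):

{code}
// Re-register the servlets that hftp-based clients such as distcp rely on
// (sketch of reverting the removed hunk; not a tested patch):
this.infoServer.addInternalServlet(null, "/streamFile/*", StreamFile.class);
this.infoServer.addInternalServlet(null, "/getFileChecksum/*",
    FileChecksumServlets.GetServlet.class);
{code}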

> DN always uses HTTP/localhost@REALM principals in SPNEGO
> --------------------------------------------------------
>
>                 Key: HDFS-8572
>                 URL: https://issues.apache.org/jira/browse/HDFS-8572
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Haohui Mai
>            Assignee: Haohui Mai
>            Priority: Blocker
>             Fix For: 2.7.1
>
>         Attachments: HDFS-8572.000.patch, HDFS-8572.001.patch, 
> HDFS-8572.002.patch
>
>
> In HDFS-7279 the Netty server in DN proxies all servlet requests to the local 
> Jetty instance.
> The Jetty server is configured incorrectly so that it always uses 
> {{HTTP/localhost@REALM}} to authenticate SPNEGO requests. As a result, 
> servlets like JMX are no longer accessible in secure deployments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)