[ https://issues.apache.org/jira/browse/HDFS-2450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13136207#comment-13136207 ]
Suresh Srinivas commented on HDFS-2450:
---------------------------------------

Comments:
# FileSystem#checkPath
#* Unrelated to this patch - can you get thatScheme before making the {{if (uri.getScheme() == null)}} check? This avoids getting the scheme twice from the URI.
#* Earlier the check on thisAuthority was based on the port the file system had. Now, with getCanonicalUri(), it uses the default port. Is this correct? All checks seem to be using defaultPort.
#* Why remove the following?
{noformat}
if (thisAuthority == thatAuthority ||        // & authorities match
    (thisAuthority != null &&
     thisAuthority.equalsIgnoreCase(thatAuthority)))
  return;
try {
  defaultUri = get(getConf()).getUri();      // or the default fs's uri
} catch (IOException e) {
  throw new RuntimeException(e);
}
{noformat}
#* The new method does not handle some cases the previous one did. If thisAuthority and thatAuthority are both null, the new method throws IllegalArgumentException. This is due to removing an if condition after the scheme check.
#* If thisAuthority and thatAuthority are not null, you end up parsing the URI for thatAuthority twice. This is also due to removing an if condition after the scheme check.
# The removal of DistributedFileSystem#makeQualified makes me concerned. I have not gone through all the cases to make sure there is no problem with that change. Can you please describe the tests you ran?

> Only complete hostname is supported to access data via hdfs://
> --------------------------------------------------------------
>
>         Key: HDFS-2450
>         URL: https://issues.apache.org/jira/browse/HDFS-2450
>     Project: Hadoop HDFS
>  Issue Type: Bug
> Affects Versions: 0.20.205.0
>    Reporter: Rajit Saha
>    Assignee: Daryn Sharp
> Attachments: HDFS-2450-1.patch, HDFS-2450-2.patch, HDFS-2450.patch, IP vs. Hostname.pdf
>
>
> If my complete hostname is host1.abc.xyz.com, only the complete hostname must be used to access data via hdfs://
> I am running the following on a .20.205 client to get data from a .20.205 NN (host1):
>
> $ hadoop dfs -copyFromLocal /etc/passwd hdfs://host1/tmp
> copyFromLocal: Wrong FS: hdfs://host1/tmp, expected: hdfs://host1.abc.xyz.com
> Usage: java FsShell [-copyFromLocal <localsrc> ... <dst>]
>
> $ hadoop dfs -copyFromLocal /etc/passwd hdfs://host1.abc/tmp/
> copyFromLocal: Wrong FS: hdfs://host1.blue/tmp/1, expected: hdfs://host1.abc.xyz.com
> Usage: java FsShell [-copyFromLocal <localsrc> ... <dst>]
>
> $ hadoop dfs -copyFromLocal /etc/passwd hftp://host1.abc.xyz/tmp/
> copyFromLocal: Wrong FS: hdfs://host1.blue/tmp/1, expected: hdfs://host1.abc.xyz.com
> Usage: java FsShell [-copyFromLocal <localsrc> ... <dst>]
>
> Only the following is supported:
> $ hadoop dfs -copyFromLocal /etc/passwd hdfs://host1.abc.xyz.com/tmp/
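The default-port canonicalization questioned in the review can be sketched as a small standalone check. This is a hypothetical illustration, not the actual FileSystem#checkPath code: the class name, the 8020 port constant, and the helper methods are all assumptions made for the sketch. It shows how appending a default port lets a port-less but fully-qualified URI match, while a short hostname like {{hdfs://host1}} still fails the string comparison - the mismatch reported in this issue.

```java
import java.net.URI;

public class CheckPathSketch {
    // Hypothetical default NameNode port, used when a URI omits one.
    static final int DEFAULT_PORT = 8020;

    // Canonicalize an authority: lowercase the host and fill in the
    // default port when missing, mirroring the getCanonicalUri()
    // behavior discussed in the review comment.
    static String canonicalAuthority(URI uri) {
        String host = uri.getHost();
        if (host == null) {
            return null;
        }
        int port = (uri.getPort() == -1) ? DEFAULT_PORT : uri.getPort();
        return host.toLowerCase() + ":" + port;
    }

    static boolean sameFileSystem(URI thisUri, URI thatUri) {
        String thisScheme = thisUri.getScheme();
        String thatScheme = thatUri.getScheme();
        if (thatScheme != null && !thatScheme.equalsIgnoreCase(thisScheme)) {
            return false;
        }
        String thisAuthority = canonicalAuthority(thisUri);
        String thatAuthority = canonicalAuthority(thatUri);
        // A path with no authority inherits this file system's authority;
        // this is the null case the comment says must not throw.
        if (thatAuthority == null) {
            return true;
        }
        return thatAuthority.equals(thisAuthority);
    }

    public static void main(String[] args) {
        URI fs = URI.create("hdfs://host1.abc.xyz.com:8020");
        // Fully qualified host without a port: matches via the default port.
        System.out.println(sameFileSystem(fs,
                URI.create("hdfs://host1.abc.xyz.com/tmp")));
        // Short hostname: no DNS canonicalization here, so it fails,
        // reproducing the "Wrong FS" behavior from the report.
        System.out.println(sameFileSystem(fs,
                URI.create("hdfs://host1/tmp")));
    }
}
```

Note that hostname comparison is purely textual in this sketch; resolving {{host1}} to {{host1.abc.xyz.com}} would require DNS canonicalization, which is the substance of the fix under review.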