Ayon,

That is not a bug; the shell is merely confused by the URL you gave it. 'dfs' carries a FileSystem object that was initialized with the hostname string from your configuration, while you provided a path with a raw IP, which it promptly rejected (it does not resolve one to the other; the comparison is a plain string match).
The '-fs' option is generic in nature: it creates a new FS object from the given URL, and hence is not bound to the hostname from your configuration. That is why your second command works.

On 06-Dec-2011, at 10:59 AM, Ayon Sinha wrote:

> This is on an EMR cluster.
>
> This does not work!
> hadoop@ip-10-34-7-51:~$ hadoop dfs -mkdir hdfs://10.34.7.51:9000/user/foobar
> mkdir: This file system object (hdfs://ip-10-34-7-51.ec2.internal:9000) does
> not support access to the request path 'hdfs://10.34.7.51:9000/user/foobar'
> You possibly called FileSystem.get(conf) when you should have called
> FileSystem.get(uri, conf) to obtain a file system supporting your path.
> Usage: java FsShell [-mkdir <path>]
>
> This works!
> hadoop@ip-10-34-7-51:~$ hadoop dfs -fs hdfs://10.34.7.51:9000 -mkdir /user/foobar
>
> -Ayon
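For reference, the check that produces that error boils down to comparing the scheme and authority of the requested path against the URI the FS object was created with; since "10.34.7.51:9000" and "ip-10-34-7-51.ec2.internal:9000" are different strings, the path is rejected. A minimal, self-contained sketch of that comparison (the helper name here is hypothetical; Hadoop's real logic lives in FileSystem.checkPath):

```java
import java.net.URI;

// Simplified sketch of the authority check the shell's FileSystem performs.
public class FsAuthorityCheck {

    // Returns true only when the path's scheme and authority match the
    // filesystem's own URI. A bare IP vs. the configured hostname will NOT
    // match: it is a plain string comparison, no DNS resolution is attempted.
    static boolean supportsPath(String fsUri, String pathUri) {
        URI fs = URI.create(fsUri);
        URI p = URI.create(pathUri);
        if (p.getScheme() == null) {
            return true; // relative path: resolved against the default FS
        }
        return fs.getScheme().equalsIgnoreCase(p.getScheme())
                && fs.getAuthority().equalsIgnoreCase(p.getAuthority());
    }

    public static void main(String[] args) {
        String fs = "hdfs://ip-10-34-7-51.ec2.internal:9000";
        // IP-based path against a hostname-based FS: rejected.
        System.out.println(supportsPath(fs, "hdfs://10.34.7.51:9000/user/foobar"));
        // Matching hostname: accepted.
        System.out.println(supportsPath(fs, "hdfs://ip-10-34-7-51.ec2.internal:9000/user/foobar"));
    }
}
```

This is also why the error message suggests FileSystem.get(uri, conf): passing the URI explicitly (as '-fs' does on the command line) builds a fresh FS object for that exact authority instead of reusing the one from the configuration.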