wrong fs error

2012-02-07 Thread Xiaomeng Wan
Hi,
I got the following error when running a Pig script:

Error initializing attempt_201201031543_0083_m_00_1:
java.lang.IllegalArgumentException: Wrong FS:
hdfs://10.2.0.135:54310/app/datastore/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201201031543_0083/job.xml,
expected: hdfs://dev-hadoop-01.***.com:54310
        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:410)
        at org.apache.hadoop.hdfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:106)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:162)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
        at org.apache.hadoop.mapred.TaskTracker.localizeJobConfFile(TaskTracker.java:1280)
        at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1174)
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1098)
        at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2271)
        at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2235)

It seems some IPs in the config files need to be changed to hostnames. Any hints?

Shawn


Re: wrong fs error

2012-02-07 Thread Rohit
Hi Shawn, 

Look at the core-site.xml config file for the fs.default.name property. 
That is where you specify the hostname and port of the HDFS NameNode. 
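
For example, a minimal sketch of that setting (the hostname below is a placeholder; substitute the FQDN your NameNode actually runs on, not its IP):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:54310</value>
  </property>
</configuration>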


Rohit Bakhshi
www.hortonworks.com

On Tuesday, February 7, 2012 at 3:01 PM, Xiaomeng Wan wrote:

 Hi,
 I got the following error when running a Pig script:
 
 Error initializing attempt_201201031543_0083_m_00_1:
 java.lang.IllegalArgumentException: Wrong FS:
 hdfs://10.2.0.135:54310/app/datastore/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201201031543_0083/job.xml,
 expected: hdfs://dev-hadoop-01.***.com:54310
 
 [...]
 
 Shawn



Re: wrong fs error

2012-02-07 Thread Harsh J
Xiaomeng,

Yep, you should not use IPs when configuring fs.default.name.
Resetting it to a hostname that resolves properly will fix this up.
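
Before changing it, a quick sanity check (sketched with a placeholder hostname) is to confirm the name resolves consistently on every node:

# "namenode.example.com" is a placeholder for your NameNode's hostname.
getent hosts namenode.example.com   # should print the same IP on every node
hostname -f                         # run on the NameNode; should print that same name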

On Wed, Feb 8, 2012 at 4:31 AM, Xiaomeng Wan shawn...@gmail.com wrote:
 Hi,
 I got the following error when running a Pig script:

 Error initializing attempt_201201031543_0083_m_00_1:
 java.lang.IllegalArgumentException: Wrong FS:
 hdfs://10.2.0.135:54310/app/datastore/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201201031543_0083/job.xml,
 expected: hdfs://dev-hadoop-01.***.com:54310

 [...]

 Shawn



-- 
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about


Re: Wrong FS

2010-02-24 Thread Marc Farnum Rendino
On Tue, Feb 23, 2010 at 9:38 AM, Edson Ramiro erlfi...@gmail.com wrote:

 Thanks Marc and Bill

 I solved this Wrong FS problem by editing /etc/hosts as Marc said.

 Now, the cluster is working ok  : ]


Great; 'preciate the confirmation!

- Marc


Re: Wrong FS

2010-02-23 Thread Edson Ramiro
Thanks Marc and Bill

I solved this Wrong FS problem by editing /etc/hosts as Marc said.

Now, the cluster is working ok  : ]

/etc/hosts on master01:
127.0.0.1 localhost.localdomain localhost
10.0.0.101 master01
10.0.0.102 master02
10.0.0.200 slave00
10.0.0.201 slave01

/etc/hosts on master02:
127.0.0.1 localhost.localdomain localhost
10.0.0.101 master01
10.0.0.102 master02
10.0.0.200 slave00
10.0.0.201 slave01

/etc/hosts on slave00:
127.0.0.1 slave00 localhost.localdomain localhost
10.0.0.101 master01
10.0.0.102 master02
10.0.0.201 slave01

/etc/hosts on slave01:
127.0.0.1 slave01 localhost.localdomain localhost
10.0.0.101 master01
10.0.0.102 master02
10.0.0.200 slave00


Edson Ramiro


On 22 February 2010 17:56, Marc Farnum Rendino mvg...@gmail.com wrote:

 Perhaps an /etc/hosts file is sufficient.

 However, FWIW, I didn't get it working until I moved to using all the real
 FQDNs.

 - Marc



RE: Wrong FS

2010-02-22 Thread Bill Habermaas
This problem has been around for a long time. Hadoop picks up the local host
name for the NameNode and uses it in all URI checks; you cannot mix
IP and host addresses. This is especially a problem on Solaris and AIX
systems, where I ran into it. You don't need to set up DNS, just use the
hostname in your URIs. I did some patches for this for 0.18 but have not
redone them for 0.20. 
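
For what it's worth, a minimal sketch of the check that throws here, against the 0.20-era API (the hostnames and path are illustrative, borrowed from this thread, and it assumes a running cluster):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WrongFsDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The FileSystem instance is bound to the hostname form of the authority.
    conf.set("fs.default.name", "hdfs://master01:9000");
    FileSystem fs = FileSystem.get(conf);

    // Naming the same node by IP fails FileSystem.checkPath() with
    // "Wrong FS: ..., expected: hdfs://master01:9000", even though
    // both URIs point at the same machine.
    fs.getFileStatus(new Path("hdfs://10.0.0.101:9000/tmp/demo"));
  }
}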

Bill

-Original Message-
From: Edson Ramiro [mailto:erlfi...@gmail.com] 
Sent: Monday, February 22, 2010 8:18 AM
To: common-user@hadoop.apache.org
Subject: Wrong FS

Hi all,

I'm getting this error

[had...@master01 hadoop-0.20.1 ]$ ./bin/hadoop jar
hadoop-0.20.1-examples.jar pi 1 1
Number of Maps  = 1
Samples per Map = 1
Wrote input for Map #0
Starting Job
java.lang.IllegalArgumentException: Wrong FS: hdfs://10.0.0.101:9000/system/job_201002221311_0001,
expected: hdfs://master01:9000

[...]

Do I need to set up DNS?

All my nodes are OK and the NameNode isn't in safe mode.

Any ideas?

Thanks in Advance.

Edson Ramiro




Re: Wrong FS

2010-02-22 Thread Edson Ramiro
But when I use the hostname, the nodes can't find the masters.

Where should I use the hostnames?

And I don't need to set up DNS, but if I do, will that solve the problem?

Edson Ramiro


On 22 February 2010 10:56, Bill Habermaas b...@habermaas.us wrote:

 This problem has been around for a long time. Hadoop picks up the local
 host name for the NameNode and uses it in all URI checks; you cannot mix
 IP and host addresses. [...]

 Bill





Re: Wrong FS

2010-02-22 Thread Marc Farnum Rendino
Perhaps an /etc/hosts file is sufficient.

However, FWIW, I didn't get it working until I moved to using all the real
FQDNs.

- Marc
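
For comparison with the short-name /etc/hosts files earlier in the thread, an FQDN-based layout would list the full name first with the short name as an alias (the domain is a placeholder):

10.0.0.101 master01.example.com master01
10.0.0.102 master02.example.com master02
10.0.0.200 slave00.example.com slave00
10.0.0.201 slave01.example.com slave01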