Re: Bad connection to FS.

2009-02-05 Thread Rasit OZDAS
I can add a little method for spotting namenode failures:
I find such problems by first running start-all.sh, then stop-all.sh.
If the namenode started without error, stop-all.sh gives the output
"stopping namenode..", but in case of an error it says "no namenode
to stop..".
In case of an error, the Hadoop log directory is always the first place to look.
It doesn't save the day, but it's worth noting.

Hope this helps,
Rasit
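
A rough sketch of that check as a script (assuming the usual bin/ and logs/
layout of the Hadoop install; adjust paths to yours):

bin/start-all.sh
sleep 30                              # give the daemons time to come up (or fail)
bin/stop-all.sh | tee /tmp/stop.out
if grep -q "no namenode to stop" /tmp/stop.out; then
    echo "NameNode did not start, check its log:"
    ls -t logs/*namenode*.log | head -1
fi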

2009/2/5 lohit :
> As noted by others, the NameNode is not running.
> Before formatting anything (which is like deleting your data), try to see why
> the NameNode isn't running.
> Search for the value of HADOOP_LOG_DIR in ./conf/hadoop-env.sh; if you have not
> set it explicitly, it defaults to <hadoop installation>/logs/*namenode*.log
> Lohit
>
>
>
> - Original Message 
> From: Amandeep Khurana 
> To: core-user@hadoop.apache.org
> Sent: Wednesday, February 4, 2009 5:26:43 PM
> Subject: Re: Bad connection to FS.
>
> Here's what I had done:
>
> 1. Stop the whole system
> 2. Delete all the data in the directories where the data and the metadata
> are stored.
> 3. Format the namenode
> 4. Start the system
>
> This solved my problem. I'm not sure if this is a good idea for you or
> not. I was pretty much installing from scratch, so I didn't mind deleting the
> files in those directories.
>
> Amandeep
>
>
> Amandeep Khurana
> Computer Science Graduate Student
> University of California, Santa Cruz
>
>
> On Wed, Feb 4, 2009 at 3:49 PM, TCK  wrote:
>
>>
>> I believe the debug logs location is still specified in hadoop-env.sh (I
>> just read the 0.19.0 doc). I think you have to shut down all nodes first
>> (stop-all), then format the namenode, and then restart (start-all) and make
>> sure that NameNode comes up too. We are using a very old version, 0.12.3,
>> and are upgrading.
>> -TCK
>>
>>
>>
>> --- On Wed, 2/4/09, Mithila Nagendra  wrote:
>> From: Mithila Nagendra 
>> Subject: Re: Bad connection to FS.
>> To: core-user@hadoop.apache.org, moonwatcher32...@yahoo.com
>> Date: Wednesday, February 4, 2009, 6:30 PM
>>
>> @TCK: Which version of hadoop have you installed?
>> @Amandeep: I did try reformatting the namenode, but it hasn't helped me
>> out in any way.
>> Mithila
>>
>>
>> On Wed, Feb 4, 2009 at 4:18 PM, TCK  wrote:
>>
>>
>>
>> Mithila, how come there is no NameNode java process listed by your jps
>> command? I would check the hadoop namenode logs to see if there was some
>> startup problem (the location of those logs would be specified in
>> hadoop-env.sh, at least in the version I'm using).
>>
>>
>> -TCK
>>
>>
>>
>>
>>
>>
>>
>> --- On Wed, 2/4/09, Mithila Nagendra  wrote:
>>
>> From: Mithila Nagendra 
>>
>> Subject: Bad connection to FS.
>>
>> To: "core-user@hadoop.apache.org" , "
>> core-user-subscr...@hadoop.apache.org" <
>> core-user-subscr...@hadoop.apache.org>
>>
>>
>> Date: Wednesday, February 4, 2009, 6:06 PM
>>
>>
>>
>> Hey all
>>
>>
>>
>> When I try to copy a folder from the local file system into HDFS using
>>
>> the command hadoop dfs -copyFromLocal, the copy fails and it gives an error
>>
>> which says "Bad connection to FS". How do I get past this? The
>>
>> following is
>>
>> the output at the time of execution:
>>
>>
>>
>> had...@renweiyu-desktop:/usr/local/hadoop$ jps
>>
>> 6873 Jps
>>
>> 6299 JobTracker
>>
>> 6029 DataNode
>>
>> 6430 TaskTracker
>>
>> 6189 SecondaryNameNode
>>
>> had...@renweiyu-desktop:/usr/local/hadoop$ ls
>>
>> bin          docs                        lib          README.txt
>>
>> build.xml    hadoop-0.18.3-ant.jar       libhdfs      src
>>
>> c++          hadoop-0.18.3-core.jar      librecordio  webapps
>>
>> CHANGES.txt  hadoop-0.18.3-examples.jar  LICENSE.txt
>>
>> conf         hadoop-0.18.3-test.jar      logs
>>
>> contrib      hadoop-0.18.3-tools.jar     NOTICE.txt
>>
>> had...@renweiyu-desktop:/usr/local/hadoop$ cd ..
>>
>> had...@renweiyu-desktop:/usr/local$ ls
>>
>> bin  etc  games  gutenberg  hadoop  hadoop-0.18.3.tar.gz  hadoop-datastore
>>
>> include  lib  man  sbin  share  src
>>
>> had...@renweiyu-desktop:/usr/local$ hadoop/bin/hadoop dfs -copyFromLocal
>>
>> gutenberg gutenberg
>>
>> 09/02/04

Re: Bad connection to FS.

2009-02-04 Thread lohit
As noted by others, the NameNode is not running.
Before formatting anything (which is like deleting your data), try to see why
the NameNode isn't running.
Search for the value of HADOOP_LOG_DIR in ./conf/hadoop-env.sh; if you have not set
it explicitly, it defaults to <hadoop installation>/logs/*namenode*.log
Lohit
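
For example (a sketch, assuming the default log directory and file naming):

grep HADOOP_LOG_DIR conf/hadoop-env.sh       # commented out means the default logs/ dir is used
ls -t logs/*namenode*.log | head -1          # most recent NameNode log file
tail -100 logs/hadoop-*-namenode-*.log       # the startup error is usually near the end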



- Original Message 
From: Amandeep Khurana 
To: core-user@hadoop.apache.org
Sent: Wednesday, February 4, 2009 5:26:43 PM
Subject: Re: Bad connection to FS.

Here's what I had done:

1. Stop the whole system
2. Delete all the data in the directories where the data and the metadata
are stored.
3. Format the namenode
4. Start the system

This solved my problem. I'm not sure if this is a good idea for you or
not. I was pretty much installing from scratch, so I didn't mind deleting the
files in those directories.

Amandeep


Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz


On Wed, Feb 4, 2009 at 3:49 PM, TCK  wrote:

>
> I believe the debug logs location is still specified in hadoop-env.sh (I
> just read the 0.19.0 doc). I think you have to shut down all nodes first
> (stop-all), then format the namenode, and then restart (start-all) and make
> sure that NameNode comes up too. We are using a very old version, 0.12.3,
> and are upgrading.
> -TCK
>
>
>
> --- On Wed, 2/4/09, Mithila Nagendra  wrote:
> From: Mithila Nagendra 
> Subject: Re: Bad connection to FS.
> To: core-user@hadoop.apache.org, moonwatcher32...@yahoo.com
> Date: Wednesday, February 4, 2009, 6:30 PM
>
> @TCK: Which version of hadoop have you installed?
> @Amandeep: I did try reformatting the namenode, but it hasn't helped me
> out in any way.
> Mithila
>
>
> On Wed, Feb 4, 2009 at 4:18 PM, TCK  wrote:
>
>
>
> Mithila, how come there is no NameNode java process listed by your jps
> command? I would check the hadoop namenode logs to see if there was some
> startup problem (the location of those logs would be specified in
> hadoop-env.sh, at least in the version I'm using).
>
>
> -TCK
>
>
>
>
>
>
>
> --- On Wed, 2/4/09, Mithila Nagendra  wrote:
>
> From: Mithila Nagendra 
>
> Subject: Bad connection to FS.
>
> To: "core-user@hadoop.apache.org" , "
> core-user-subscr...@hadoop.apache.org" <
> core-user-subscr...@hadoop.apache.org>
>
>
> Date: Wednesday, February 4, 2009, 6:06 PM
>
>
>
> Hey all
>
>
>
> When I try to copy a folder from the local file system into HDFS using
>
> the command hadoop dfs -copyFromLocal, the copy fails and it gives an error
>
> which says "Bad connection to FS". How do I get past this? The
>
> following is
>
> the output at the time of execution:
>
>
>
> had...@renweiyu-desktop:/usr/local/hadoop$ jps
>
> 6873 Jps
>
> 6299 JobTracker
>
> 6029 DataNode
>
> 6430 TaskTracker
>
> 6189 SecondaryNameNode
>
> had...@renweiyu-desktop:/usr/local/hadoop$ ls
>
> bin          docs                        lib          README.txt
>
> build.xml    hadoop-0.18.3-ant.jar       libhdfs      src
>
> c++          hadoop-0.18.3-core.jar      librecordio  webapps
>
> CHANGES.txt  hadoop-0.18.3-examples.jar  LICENSE.txt
>
> conf         hadoop-0.18.3-test.jar      logs
>
> contrib      hadoop-0.18.3-tools.jar     NOTICE.txt
>
> had...@renweiyu-desktop:/usr/local/hadoop$ cd ..
>
> had...@renweiyu-desktop:/usr/local$ ls
>
> bin  etc  games  gutenberg  hadoop  hadoop-0.18.3.tar.gz  hadoop-datastore
>
> include  lib  man  sbin  share  src
>
> had...@renweiyu-desktop:/usr/local$ hadoop/bin/hadoop dfs -copyFromLocal
>
> gutenberg gutenberg
>
> 09/02/04 15:58:21 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 0 time(s).
>
> 09/02/04 15:58:22 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 1 time(s).
>
> 09/02/04 15:58:23 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 2 time(s).
>
> 09/02/04 15:58:24 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 3 time(s).
>
> 09/02/04 15:58:25 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 4 time(s).
>
> 09/02/04 15:58:26 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 5 time(s).
>
> 09/02/04 15:58:27 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 6 time(s).
>
> 09/02/04 15:58:28 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 7 time(s).
>
> 09/02/04 15:58:29 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 8 time(s).
>
> 09/02/04 15:58:30 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 9 time(s).
>
> Bad connection to FS. command aborted.
>
>
>
> The command jps shows that the Hadoop system is up and running. So I have
> no
>
> idea what's wrong!
>
>
>
> Thanks!
>
> Mithila
>
>



Re: Bad connection to FS.

2009-02-04 Thread Amandeep Khurana
Here's what I had done:

1. Stop the whole system
2. Delete all the data in the directories where the data and the metadata
are stored.
3. Format the namenode
4. Start the system

This solved my problem. I'm not sure if this is a good idea for you or
not. I was pretty much installing from scratch, so I didn't mind deleting the
files in those directories.
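
In command form that sequence looks roughly like this (a sketch only; it
erases everything in HDFS, so only do it on a throwaway install, and the
hadoop-datastore path is just a placeholder for whatever dfs.name.dir and
dfs.data.dir point at in your hadoop-site.xml):

bin/stop-all.sh
rm -rf /usr/local/hadoop-datastore/*      # 2. wipe the data and metadata directories
bin/hadoop namenode -format               # 3. recreate the NameNode metadata
bin/start-all.sh
jps                                       # NameNode should now appear in the list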

Amandeep


Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz


On Wed, Feb 4, 2009 at 3:49 PM, TCK  wrote:

>
> I believe the debug logs location is still specified in hadoop-env.sh (I
> just read the 0.19.0 doc). I think you have to shut down all nodes first
> (stop-all), then format the namenode, and then restart (start-all) and make
> sure that NameNode comes up too. We are using a very old version, 0.12.3,
> and are upgrading.
> -TCK
>
>
>
> --- On Wed, 2/4/09, Mithila Nagendra  wrote:
> From: Mithila Nagendra 
> Subject: Re: Bad connection to FS.
> To: core-user@hadoop.apache.org, moonwatcher32...@yahoo.com
> Date: Wednesday, February 4, 2009, 6:30 PM
>
> @TCK: Which version of hadoop have you installed?
> @Amandeep: I did try reformatting the namenode, but it hasn't helped me
> out in any way.
> Mithila
>
>
> On Wed, Feb 4, 2009 at 4:18 PM, TCK  wrote:
>
>
>
> Mithila, how come there is no NameNode java process listed by your jps
> command? I would check the hadoop namenode logs to see if there was some
> startup problem (the location of those logs would be specified in
> hadoop-env.sh, at least in the version I'm using).
>
>
> -TCK
>
>
>
>
>
>
>
> --- On Wed, 2/4/09, Mithila Nagendra  wrote:
>
> From: Mithila Nagendra 
>
> Subject: Bad connection to FS.
>
> To: "core-user@hadoop.apache.org" , "
> core-user-subscr...@hadoop.apache.org" <
> core-user-subscr...@hadoop.apache.org>
>
>
> Date: Wednesday, February 4, 2009, 6:06 PM
>
>
>
> Hey all
>
>
>
> When I try to copy a folder from the local file system into HDFS using
>
> the command hadoop dfs -copyFromLocal, the copy fails and it gives an error
>
> which says "Bad connection to FS". How do I get past this? The
>
> following is
>
> the output at the time of execution:
>
>
>
> had...@renweiyu-desktop:/usr/local/hadoop$ jps
>
> 6873 Jps
>
> 6299 JobTracker
>
> 6029 DataNode
>
> 6430 TaskTracker
>
> 6189 SecondaryNameNode
>
> had...@renweiyu-desktop:/usr/local/hadoop$ ls
>
> bin          docs                        lib          README.txt
>
> build.xml    hadoop-0.18.3-ant.jar       libhdfs      src
>
> c++          hadoop-0.18.3-core.jar      librecordio  webapps
>
> CHANGES.txt  hadoop-0.18.3-examples.jar  LICENSE.txt
>
> conf         hadoop-0.18.3-test.jar      logs
>
> contrib      hadoop-0.18.3-tools.jar     NOTICE.txt
>
> had...@renweiyu-desktop:/usr/local/hadoop$ cd ..
>
> had...@renweiyu-desktop:/usr/local$ ls
>
> bin  etc  games  gutenberg  hadoop  hadoop-0.18.3.tar.gz  hadoop-datastore
>
> include  lib  man  sbin  share  src
>
> had...@renweiyu-desktop:/usr/local$ hadoop/bin/hadoop dfs -copyFromLocal
>
> gutenberg gutenberg
>
> 09/02/04 15:58:21 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 0 time(s).
>
> 09/02/04 15:58:22 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 1 time(s).
>
> 09/02/04 15:58:23 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 2 time(s).
>
> 09/02/04 15:58:24 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 3 time(s).
>
> 09/02/04 15:58:25 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 4 time(s).
>
> 09/02/04 15:58:26 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 5 time(s).
>
> 09/02/04 15:58:27 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 6 time(s).
>
> 09/02/04 15:58:28 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 7 time(s).
>
> 09/02/04 15:58:29 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 8 time(s).
>
> 09/02/04 15:58:30 INFO ipc.Client: Retrying connect to server: localhost/
>
> 127.0.0.1:54310. Already tried 9 time(s).
>
> Bad connection to FS. command aborted.
>
>
>
> The command jps shows that the Hadoop system is up and running. So I have
> no
>
> idea what's wrong!
>
>
>
> Thanks!
>
> Mithila
>
>


Re: Bad connection to FS.

2009-02-04 Thread TCK

I believe the debug logs location is still specified in hadoop-env.sh (I just 
read the 0.19.0 doc). I think you have to shut down all nodes first (stop-all), 
then format the namenode, and then restart (start-all) and make sure that 
NameNode comes up too. We are using a very old version, 0.12.3, and are 
upgrading.
-TCK
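
A quick check for the "make sure that NameNode comes up too" step (a sketch,
assuming jps is on the PATH and the default logs/ directory):

jps | grep NameNode                       # should print a NameNode process
tail -50 logs/hadoop-*-namenode-*.log     # if it doesn't, the startup error is in here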



--- On Wed, 2/4/09, Mithila Nagendra  wrote:
From: Mithila Nagendra 
Subject: Re: Bad connection to FS.
To: core-user@hadoop.apache.org, moonwatcher32...@yahoo.com
Date: Wednesday, February 4, 2009, 6:30 PM

@TCK: Which version of hadoop have you installed?
@Amandeep: I did try reformatting the namenode, but it hasn't helped me out
in any way.
Mithila


On Wed, Feb 4, 2009 at 4:18 PM, TCK  wrote:



Mithila, how come there is no NameNode java process listed by your jps command? 
I would check the hadoop namenode logs to see if there was some startup problem 
(the location of those logs would be specified in hadoop-env.sh, at least in 
the version I'm using).


-TCK







--- On Wed, 2/4/09, Mithila Nagendra  wrote:

From: Mithila Nagendra 

Subject: Bad connection to FS.

To: "core-user@hadoop.apache.org" , 
"core-user-subscr...@hadoop.apache.org" 


Date: Wednesday, February 4, 2009, 6:06 PM



Hey all



When I try to copy a folder from the local file system into HDFS using

the command hadoop dfs -copyFromLocal, the copy fails and it gives an error

which says "Bad connection to FS". How do I get past this? The

following is

the output at the time of execution:



had...@renweiyu-desktop:/usr/local/hadoop$ jps

6873 Jps

6299 JobTracker

6029 DataNode

6430 TaskTracker

6189 SecondaryNameNode

had...@renweiyu-desktop:/usr/local/hadoop$ ls

bin          docs                        lib          README.txt

build.xml    hadoop-0.18.3-ant.jar       libhdfs      src

c++          hadoop-0.18.3-core.jar      librecordio  webapps

CHANGES.txt  hadoop-0.18.3-examples.jar  LICENSE.txt

conf         hadoop-0.18.3-test.jar      logs

contrib      hadoop-0.18.3-tools.jar     NOTICE.txt

had...@renweiyu-desktop:/usr/local/hadoop$ cd ..

had...@renweiyu-desktop:/usr/local$ ls

bin  etc  games  gutenberg  hadoop  hadoop-0.18.3.tar.gz  hadoop-datastore

include  lib  man  sbin  share  src

had...@renweiyu-desktop:/usr/local$ hadoop/bin/hadoop dfs -copyFromLocal

gutenberg gutenberg

09/02/04 15:58:21 INFO ipc.Client: Retrying connect to server: localhost/

127.0.0.1:54310. Already tried 0 time(s).

09/02/04 15:58:22 INFO ipc.Client: Retrying connect to server: localhost/

127.0.0.1:54310. Already tried 1 time(s).

09/02/04 15:58:23 INFO ipc.Client: Retrying connect to server: localhost/

127.0.0.1:54310. Already tried 2 time(s).

09/02/04 15:58:24 INFO ipc.Client: Retrying connect to server: localhost/

127.0.0.1:54310. Already tried 3 time(s).

09/02/04 15:58:25 INFO ipc.Client: Retrying connect to server: localhost/

127.0.0.1:54310. Already tried 4 time(s).

09/02/04 15:58:26 INFO ipc.Client: Retrying connect to server: localhost/

127.0.0.1:54310. Already tried 5 time(s).

09/02/04 15:58:27 INFO ipc.Client: Retrying connect to server: localhost/

127.0.0.1:54310. Already tried 6 time(s).

09/02/04 15:58:28 INFO ipc.Client: Retrying connect to server: localhost/

127.0.0.1:54310. Already tried 7 time(s).

09/02/04 15:58:29 INFO ipc.Client: Retrying connect to server: localhost/

127.0.0.1:54310. Already tried 8 time(s).

09/02/04 15:58:30 INFO ipc.Client: Retrying connect to server: localhost/

127.0.0.1:54310. Already tried 9 time(s).

Bad connection to FS. command aborted.



The command jps shows that the Hadoop system is up and running. So I have no

idea what's wrong!



Thanks!

Mithila







      




  

Re: Bad connection to FS.

2009-02-04 Thread Amandeep Khurana
I faced the same issue a few days back. Formatting the namenode made it work
for me.
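
For reference, the reformat itself is just the following (note that it throws
away the existing HDFS metadata, so treat it as a last resort on anything but
a fresh install):

bin/stop-all.sh
bin/hadoop namenode -format
bin/start-all.sh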

Amandeep


Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz


On Wed, Feb 4, 2009 at 3:06 PM, Mithila Nagendra  wrote:

> Hey all
>
> When I try to copy a folder from the local file system into HDFS using
> the command hadoop dfs -copyFromLocal, the copy fails and it gives an error
> which says "Bad connection to FS". How do I get past this? The following is
> the output at the time of execution:
>
> had...@renweiyu-desktop:/usr/local/hadoop$ jps
> 6873 Jps
> 6299 JobTracker
> 6029 DataNode
> 6430 TaskTracker
> 6189 SecondaryNameNode
> had...@renweiyu-desktop:/usr/local/hadoop$ ls
> bin          docs                        lib          README.txt
> build.xml    hadoop-0.18.3-ant.jar       libhdfs      src
> c++          hadoop-0.18.3-core.jar      librecordio  webapps
> CHANGES.txt  hadoop-0.18.3-examples.jar  LICENSE.txt
> conf         hadoop-0.18.3-test.jar      logs
> contrib      hadoop-0.18.3-tools.jar     NOTICE.txt
> had...@renweiyu-desktop:/usr/local/hadoop$ cd ..
> had...@renweiyu-desktop:/usr/local$ ls
> bin  etc  games  gutenberg  hadoop  hadoop-0.18.3.tar.gz  hadoop-datastore
> include  lib  man  sbin  share  src
> had...@renweiyu-desktop:/usr/local$ hadoop/bin/hadoop dfs -copyFromLocal
> gutenberg gutenberg
> 09/02/04 15:58:21 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 0 time(s).
> 09/02/04 15:58:22 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 1 time(s).
> 09/02/04 15:58:23 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 2 time(s).
> 09/02/04 15:58:24 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 3 time(s).
> 09/02/04 15:58:25 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 4 time(s).
> 09/02/04 15:58:26 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 5 time(s).
> 09/02/04 15:58:27 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 6 time(s).
> 09/02/04 15:58:28 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 7 time(s).
> 09/02/04 15:58:29 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 8 time(s).
> 09/02/04 15:58:30 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 9 time(s).
> Bad connection to FS. command aborted.
>
> The command jps shows that the Hadoop system is up and running. So I have
> no
> idea what's wrong!
>
> Thanks!
> Mithila
>


Re: Bad connection to FS.

2009-02-04 Thread TCK

Mithila, how come there is no NameNode java process listed by your jps command? 
I would check the hadoop namenode logs to see if there was some startup problem 
(the location of those logs would be specified in hadoop-env.sh, at least in 
the version I'm using).
-TCK



--- On Wed, 2/4/09, Mithila Nagendra  wrote:
From: Mithila Nagendra 
Subject: Bad connection to FS.
To: "core-user@hadoop.apache.org" , 
"core-user-subscr...@hadoop.apache.org" 
Date: Wednesday, February 4, 2009, 6:06 PM

Hey all

When I try to copy a folder from the local file system into HDFS using
the command hadoop dfs -copyFromLocal, the copy fails and it gives an error
which says "Bad connection to FS". How do I get past this? The
following is
the output at the time of execution:

had...@renweiyu-desktop:/usr/local/hadoop$ jps
6873 Jps
6299 JobTracker
6029 DataNode
6430 TaskTracker
6189 SecondaryNameNode
had...@renweiyu-desktop:/usr/local/hadoop$ ls
bin          docs                        lib          README.txt
build.xml    hadoop-0.18.3-ant.jar       libhdfs      src
c++          hadoop-0.18.3-core.jar      librecordio  webapps
CHANGES.txt  hadoop-0.18.3-examples.jar  LICENSE.txt
conf         hadoop-0.18.3-test.jar      logs
contrib      hadoop-0.18.3-tools.jar     NOTICE.txt
had...@renweiyu-desktop:/usr/local/hadoop$ cd ..
had...@renweiyu-desktop:/usr/local$ ls
bin  etc  games  gutenberg  hadoop  hadoop-0.18.3.tar.gz  hadoop-datastore
include  lib  man  sbin  share  src
had...@renweiyu-desktop:/usr/local$ hadoop/bin/hadoop dfs -copyFromLocal
gutenberg gutenberg
09/02/04 15:58:21 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 0 time(s).
09/02/04 15:58:22 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 1 time(s).
09/02/04 15:58:23 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 2 time(s).
09/02/04 15:58:24 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 3 time(s).
09/02/04 15:58:25 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 4 time(s).
09/02/04 15:58:26 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 5 time(s).
09/02/04 15:58:27 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 6 time(s).
09/02/04 15:58:28 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 7 time(s).
09/02/04 15:58:29 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 8 time(s).
09/02/04 15:58:30 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 9 time(s).
Bad connection to FS. command aborted.

The command jps shows that the Hadoop system is up and running. So I have no
idea what's wrong!

Thanks!
Mithila



  

Bad connection to FS.

2009-02-04 Thread Mithila Nagendra
Hey all

When I try to copy a folder from the local file system into HDFS using
the command hadoop dfs -copyFromLocal, the copy fails and it gives an error
which says "Bad connection to FS". How do I get past this? The following is
the output at the time of execution:

had...@renweiyu-desktop:/usr/local/hadoop$ jps
6873 Jps
6299 JobTracker
6029 DataNode
6430 TaskTracker
6189 SecondaryNameNode
had...@renweiyu-desktop:/usr/local/hadoop$ ls
bin          docs                        lib          README.txt
build.xml    hadoop-0.18.3-ant.jar       libhdfs      src
c++          hadoop-0.18.3-core.jar      librecordio  webapps
CHANGES.txt  hadoop-0.18.3-examples.jar  LICENSE.txt
conf         hadoop-0.18.3-test.jar      logs
contrib      hadoop-0.18.3-tools.jar     NOTICE.txt
had...@renweiyu-desktop:/usr/local/hadoop$ cd ..
had...@renweiyu-desktop:/usr/local$ ls
bin  etc  games  gutenberg  hadoop  hadoop-0.18.3.tar.gz  hadoop-datastore
include  lib  man  sbin  share  src
had...@renweiyu-desktop:/usr/local$ hadoop/bin/hadoop dfs -copyFromLocal
gutenberg gutenberg
09/02/04 15:58:21 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 0 time(s).
09/02/04 15:58:22 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 1 time(s).
09/02/04 15:58:23 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 2 time(s).
09/02/04 15:58:24 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 3 time(s).
09/02/04 15:58:25 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 4 time(s).
09/02/04 15:58:26 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 5 time(s).
09/02/04 15:58:27 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 6 time(s).
09/02/04 15:58:28 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 7 time(s).
09/02/04 15:58:29 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 8 time(s).
09/02/04 15:58:30 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 9 time(s).
Bad connection to FS. command aborted.

The command jps shows that the Hadoop system is up and running. So I have no
idea what's wrong!

Thanks!
Mithila
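
Since jps shows no NameNode process here, a few quick checks (a sketch; port
54310 is taken from the retry messages above, and the config file name assumes
a pre-0.20 release such as this 0.18.3 install):

netstat -tln | grep 54310 || echo "nothing listening on 54310"   # is a NameNode listening?
grep -A1 fs.default.name conf/hadoop-site.xml                    # where the client is pointed
tail -100 logs/hadoop-*-namenode-*.log                           # why the NameNode did not start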


Re: Bad connection to FS. command aborted.

2008-12-04 Thread elangovan anbalahan
Please tell me why I am getting this error;
it is becoming hard for me to find a solution.

*put: java.io.IOException: failed to create file
/user/nutch/urls/urls/.urllist.txt.crc on client 127.0.0.1 because
target-length is 0, below MIN_REPLICATION (1)*

I am getting this when I do
bin/hadoop dfs -put urls urs


bash-3.2$ bin/start-all.sh
starting namenode, logging to
/nutch/search/logs/hadoop-nutch-namenode-elan.out
localhost: starting datanode, logging to
/nutch/search/logs/hadoop-nutch-datanode-elan.out
cat: /nutch/search/bin/../conf/masters: No such file or directory
starting jobtracker, logging to
/nutch/search/logs/hadoop-nutch-jobtracker-elan.out
localhost: starting tasktracker, logging to
/nutch/search/logs/hadoop-nutch-tasktracker-elan.out
bash-3.2$ mkdir urls
bash-3.2$ vi urls/urllist.txt
bash-3.2$ bin/hadoop dfs -put urls urls
put: java.io.IOException: failed to create file
/user/nutch/urls/.urllist.txt.crc on client 127.0.0.1 because target-length
is 0, below MIN_REPLICATION (1)
bash-3.2$ bin/hadoop dfs -put urls urls
put: java.io.IOException: failed to create file
/user/nutch/urls/urls/.urllist.txt.crc on client 127.0.0.1 because
target-length is 0, below MIN_REPLICATION (1)
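
Two things in that transcript are worth checking (a sketch; paths assume the
/nutch/search install shown above, and on very old releases dfsadmin may not
exist, in which case try bin/hadoop dfs -report):

echo localhost > /nutch/search/conf/masters   # start-all.sh complained that this file is missing;
                                              # on a single-node setup it normally holds "localhost"
bin/hadoop dfsadmin -report                   # "below MIN_REPLICATION (1)" often means the NameNode
                                              # sees no live DataNodes; this shows how many it sees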



On Thu, Dec 4, 2008 at 2:10 PM, elangovan anbalahan
<[EMAIL PROTECTED]>wrote:

> Hadoop 0.12.2
>
>
>
> On Thu, Dec 4, 2008 at 1:54 PM, Sagar Naik <[EMAIL PROTECTED]> wrote:
>
>> hadoop version ?
>> command : bin/hadoop version
>>
>> -Sagar
>>
>>
>>
>> elangovan anbalahan wrote:
>>
>>> I tried that but nothing happened.
>>>
>>> bash-3.2$ bin/hadoop dfs -put urll urll
>>> put: java.io.IOException: failed to create file
>>> /user/nutch/urll/.urls.crc
>>> on client 192.168.1.6 because target-length is 0, below MIN_REPLICATION
>>> (1)
>>> bash-3.2$ bin/hadoop dfs -cat urls/part-0* > urls
>>> bash-3.2$ bin/hadoop dfs -ls urls
>>> Found 0 items
>>> bash-3.2$ bin/hadoop dfs -ls urll
>>> Found 0 items
>>> bash-3.2$ bin/hadoop dfs -ls
>>> Found 2 items
>>> /user/nutch/$
>>> /user/nutch/urll
>>>
>>>
>>> How do I get rid of the following error:
>>> *put: java.io.IOException: failed to create file
>>> /user/nutch/urll/.urls.crc
>>> on client 192.168.1.6 because target-length is 0, below MIN_REPLICATION
>>> (1)
>>>
>>>
>>> *
>>> On Thu, Dec 4, 2008 at 1:29 PM, Elia Mazzawi
>>> <[EMAIL PROTECTED]>wrote:
>>>
>>>
>>>
 you didn't say what the error was?

 but you can try this; it should do the same thing

 bin/hadoop dfs -cat urls/part-0* > urls


 elangovan anbalahan wrote:



> I'm getting this error message when I do
>
> *bash-3.2$ bin/hadoop dfs -put urls urls*
>
>
> Please let me know the resolution; I have a project submission in a few
> hours.
>
>
>
>
>


>>>
>>>
>>>
>>
>>
>


Re: Bad connection to FS. command aborted.

2008-12-04 Thread elangovan anbalahan
Hadoop 0.12.2


On Thu, Dec 4, 2008 at 1:54 PM, Sagar Naik <[EMAIL PROTECTED]> wrote:

> hadoop version ?
> command : bin/hadoop version
>
> -Sagar
>
>
>
> elangovan anbalahan wrote:
>
>> I tried that but nothing happened.
>>
>> bash-3.2$ bin/hadoop dfs -put urll urll
>> put: java.io.IOException: failed to create file /user/nutch/urll/.urls.crc
>> on client 192.168.1.6 because target-length is 0, below MIN_REPLICATION
>> (1)
>> bash-3.2$ bin/hadoop dfs -cat urls/part-0* > urls
>> bash-3.2$ bin/hadoop dfs -ls urls
>> Found 0 items
>> bash-3.2$ bin/hadoop dfs -ls urll
>> Found 0 items
>> bash-3.2$ bin/hadoop dfs -ls
>> Found 2 items
>> /user/nutch/$
>> /user/nutch/urll
>>
>>
>> How do I get rid of the following error:
>> *put: java.io.IOException: failed to create file
>> /user/nutch/urll/.urls.crc
>> on client 192.168.1.6 because target-length is 0, below MIN_REPLICATION
>> (1)
>>
>>
>> *
>> On Thu, Dec 4, 2008 at 1:29 PM, Elia Mazzawi
>> <[EMAIL PROTECTED]>wrote:
>>
>>
>>
>>> you didn't say what the error was?
>>>
>>> but you can try this; it should do the same thing
>>>
>>> bin/hadoop dfs -cat urls/part-0* > urls
>>>
>>>
>>> elangovan anbalahan wrote:
>>>
>>>
>>>
 I'm getting this error message when I do

 *bash-3.2$ bin/hadoop dfs -put urls urls*


 Please let me know the resolution; I have a project submission in a few
 hours.





>>>
>>>
>>
>>
>>
>
>


Re: Bad connection to FS. command aborted.

2008-12-04 Thread Sagar Naik

hadoop version ?
command : bin/hadoop version

-Sagar


elangovan anbalahan wrote:

I tried that but nothing happened.

bash-3.2$ bin/hadoop dfs -put urll urll
put: java.io.IOException: failed to create file /user/nutch/urll/.urls.crc
on client 192.168.1.6 because target-length is 0, below MIN_REPLICATION (1)
bash-3.2$ bin/hadoop dfs -cat urls/part-0* > urls
bash-3.2$ bin/hadoop dfs -ls urls
Found 0 items
bash-3.2$ bin/hadoop dfs -ls urll
Found 0 items
bash-3.2$ bin/hadoop dfs -ls
Found 2 items
/user/nutch/$
/user/nutch/urll


How do I get rid of the following error:
*put: java.io.IOException: failed to create file /user/nutch/urll/.urls.crc
on client 192.168.1.6 because target-length is 0, below MIN_REPLICATION (1)


*
On Thu, Dec 4, 2008 at 1:29 PM, Elia Mazzawi
<[EMAIL PROTECTED]>wrote:

  

you didn't say what the error was?

but you can try this; it should do the same thing

bin/hadoop dfs -cat urls/part-0* > urls


elangovan anbalahan wrote:



I'm getting this error message when I do

*bash-3.2$ bin/hadoop dfs -put urls urls*


Please let me know the resolution; I have a project submission in a few
hours.



  



  




Re: Bad connection to FS. command aborted.

2008-12-04 Thread elangovan anbalahan
The namenode is running,
but I did not understand: what should I check in the classpath?

On Thu, Dec 4, 2008 at 1:34 PM, Sagar Naik <[EMAIL PROTECTED]> wrote:

> Check your conf in the classpath.
> Check if the Namenode is running.
> You are not able to connect to the intended Namenode.
>
> -Sagar
>
> elangovan anbalahan wrote:
>
>> I'm getting this error message when I do
>>
>> *bash-3.2$ bin/hadoop dfs -put urls urls*
>>
>>
>> Please let me know the resolution; I have a project submission in a few
>> hours.
>>
>>
>>
>
>


Re: Bad connection to FS. command aborted.

2008-12-04 Thread elangovan anbalahan
I tried that but nothing happened.

bash-3.2$ bin/hadoop dfs -put urll urll
put: java.io.IOException: failed to create file /user/nutch/urll/.urls.crc
on client 192.168.1.6 because target-length is 0, below MIN_REPLICATION (1)
bash-3.2$ bin/hadoop dfs -cat urls/part-0* > urls
bash-3.2$ bin/hadoop dfs -ls urls
Found 0 items
bash-3.2$ bin/hadoop dfs -ls urll
Found 0 items
bash-3.2$ bin/hadoop dfs -ls
Found 2 items
/user/nutch/$
/user/nutch/urll


How do I get rid of the following error:
*put: java.io.IOException: failed to create file /user/nutch/urll/.urls.crc
on client 192.168.1.6 because target-length is 0, below MIN_REPLICATION (1)


*
On Thu, Dec 4, 2008 at 1:29 PM, Elia Mazzawi
<[EMAIL PROTECTED]>wrote:

>
> you didn't say what the error was?
>
> but you can try this; it should do the same thing
>
> bin/hadoop dfs -cat urls/part-0* > urls
>
>
> elangovan anbalahan wrote:
>
>> I'm getting this error message when I do
>>
>> *bash-3.2$ bin/hadoop dfs -put urls urls*
>>
>>
>> Please let me know the resolution; I have a project submission in a few
>> hours.
>>
>>
>>
>
>


Re: Bad connection to FS. command aborted.

2008-12-04 Thread Sagar Naik

Check your conf in the classpath.
Check if the Namenode is running.
You are not able to connect to the intended Namenode.

-Sagar
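
Concretely, something like this (a sketch; the file name assumes the pre-0.20
hadoop-site.xml layout used by this nutch-bundled install):

grep -A1 fs.default.name conf/hadoop-site.xml   # which NameNode the client is configured to use
jps | grep NameNode                             # whether a NameNode process is actually running
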
elangovan anbalahan wrote:

I'm getting this error message when I do

*bash-3.2$ bin/hadoop dfs -put urls urls*


Please let me know the resolution; I have a project submission in a few hours.

  




Re: Bad connection to FS. command aborted.

2008-12-04 Thread Elia Mazzawi


you didn't say what the error was?

but you can try this; it should do the same thing

bin/hadoop dfs -cat urls/part-0* > urls

elangovan anbalahan wrote:

I'm getting this error message when I do

*bash-3.2$ bin/hadoop dfs -put urls urls*


Please let me know the resolution; I have a project submission in a few hours.

  




Bad connection to FS. command aborted.

2008-12-04 Thread elangovan anbalahan
I'm getting this error message when I do

*bash-3.2$ bin/hadoop dfs -put urls urls*


Please let me know the resolution; I have a project submission in a few hours.