Re: Problem with start-all on 0.16.4

2008-05-22 Thread Jean-Adrien

Yep, that's it. I had mixed up my config dirs and had updated hadoop-site rather
than hadoop-default...

Thx a lot
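For anyone landing here from the archives, the recovery boils down to rebuilding the conf dir from the new release and carrying over only your own overrides. A sketch of the idea, using throwaway directories that stand in for the real 0.15.3 and 0.16.4 install paths (all names and file contents below are invented for illustration):

```shell
set -e
# Mock layout standing in for the old and new installs (hypothetical paths).
old=$(mktemp -d); new=$(mktemp -d)
mkdir -p "$old/conf" "$new/conf"
echo '<configuration><!-- my site overrides --></configuration>' > "$old/conf/hadoop-site.xml"
echo '<configuration><!-- 0.15.3 defaults --></configuration>'   > "$old/conf/hadoop-default.xml"
echo '<configuration><!-- 0.16.4 defaults --></configuration>'   > "$new/conf/hadoop-default.xml"

# Rule of thumb: hadoop-default.xml always comes verbatim from the new
# release; only hadoop-site.xml (your overrides) is carried over.
cp "$old/conf/hadoop-site.xml" "$new/conf/hadoop-site.xml"

grep -q '0.16.4 defaults'   "$new/conf/hadoop-default.xml"
grep -q 'my site overrides' "$new/conf/hadoop-site.xml"
echo "conf dir rebuilt: defaults from new release, overrides carried over"
```

Copying the old hadoop-default.xml over the new one (or keeping an old conf dir on the classpath) is exactly what leaves the new-in-0.16 settings missing.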



Erwan Arzur-2 wrote:
> 
> Hey,
> 
> Did you update the config directory(ies) with defaults from the new
> release ?
> 
> config/hadoop-default.xml may have been modified with new settings
> that trigger this NPE if not present.
> 
> Erwan
> 
> On Wed, May 21, 2008 at 4:14 PM, Jean-Adrien <[EMAIL PROTECTED]> wrote:
>>
>> Hi
>>
>> Same problem for me. I tried to rm -rf the datastore as well (prior to
>> reformat) but no change. Any clue is welcome
>>
>> Regards
>>
>>
>>
>> Adam Wynne wrote:
>>>
>>> Hi,
>>>
>>> I have a working 0.15.3 install and am trying to upgrade to 0.16.4.  I
>>> want to start clean with an empty filesystem, so I just reformatted
>>> the filesystem instead of using the upgrade option.  When I run
>>> start-all.sh, I get a null pointer exception originating from the
>>> NetUtils.getServerAddress() method.  This cluster is on a private
>>> network, could there be a bug with the way hadoop is looking up the
>>> address?  Other ideas?
>>>
>>> Here is the full error and stack trace from the namenode log:
>>>
>>> 2008-05-14 08:03:37,252 INFO org.apache.hadoop.fs.FSNamesystem:
>>> fsOwner=qeadmin,qeadmin,wheel
>>> 2008-05-14 08:03:37,253 INFO org.apache.hadoop.fs.FSNamesystem:
>>> supergroup=supergroup
>>> 2008-05-14 08:03:37,253 INFO org.apache.hadoop.fs.FSNamesystem:
>>> isPermissionEnabled=true
>>> 2008-05-14 08:03:37,358 INFO org.apache.hadoop.fs.FSNamesystem:
>>> Finished loading FSImage in 137 msecs
>>> 2008-05-14 08:03:37,362 INFO org.apache.hadoop.fs.FSNamesystem:
>>> Leaving safemode after 142 msecs
>>> 2008-05-14 08:03:37,362 INFO org.apache.hadoop.dfs.StateChange: STATE*
>>> Network topology has 0 racks and 0 datanodes
>>> 2008-05-14 08:03:37,363 INFO org.apache.hadoop.dfs.StateChange: STATE*
>>> UnderReplicatedBlocks has 0 blocks
>>> 2008-05-14 08:03:37,377 INFO org.apache.hadoop.fs.FSNamesystem:
>>> Registered FSNamesystemStatusMBean
>>> 2008-05-14 08:03:37,398 ERROR org.apache.hadoop.dfs.NameNode:
>>> java.lang.NullPointerException
>>>at
>>> org.apache.hadoop.net.NetUtils.getServerAddress(NetUtils.java:148)
>>>at
>>> org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:279)
>>>at
>>> org.apache.hadoop.dfs.FSNamesystem.&lt;init&gt;(FSNamesystem.java:235)
>>>at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:131)
>>>at org.apache.hadoop.dfs.NameNode.&lt;init&gt;(NameNode.java:176)
>>>at org.apache.hadoop.dfs.NameNode.&lt;init&gt;(NameNode.java:162)
>>>at
>>> org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:846)
>>>at org.apache.hadoop.dfs.NameNode.main(NameNode.java:855)
>>>
>>> 2008-05-14 08:03:37,399 INFO org.apache.hadoop.dfs.NameNode:
>>> SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down NameNode at compute-0-0.local/192.168.1.254
>>> ************************************************************/
>>>
>>>
>>> Thanks
>>>
>>>
>>
>> --
>> View this message in context:
>> http://www.nabble.com/Problem-with-start-all-on-0.16.4-tp17233437p17364262.html
>> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>>
>>
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Problem-with-start-all-on-0.16.4-tp17233437p17399466.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.



Re: Problem with start-all on 0.16.4

2008-05-21 Thread Erwan Arzur
Hey,

Did you update the config directory(ies) with defaults from the new release ?

config/hadoop-default.xml may have been modified with new settings
that trigger this NPE if not present.
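To make this concrete: in the 0.16 line several daemon addresses were consolidated from split bindAddress/port pairs into single host:port keys, and NetUtils.getServerAddress() hits an NPE when the new-style key is absent altogether. As an illustration only (we are assuming this particular trace points at the NameNode HTTP address), this is the kind of entry a 0.16.x hadoop-default.xml carries that a 0.15.x one lacks:

```xml
<!-- Shipped in 0.16.x defaults, absent from a 0.15.x hadoop-default.xml.
     dfs.http.address replaced the old dfs.info.bindAddress/dfs.info.port
     pair (example values; check the file that ships with your release). -->
<property>
  <name>dfs.http.address</name>
  <value>0.0.0.0:50070</value>
</property>
```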

Erwan

On Wed, May 21, 2008 at 4:14 PM, Jean-Adrien <[EMAIL PROTECTED]> wrote:
>
> Hi
>
> Same problem for me. I tried to rm -rf the datastore as well (prior to
> reformat) but no change. Any clue is welcome
>
> Regards
>
>
>
> Adam Wynne wrote:
>>
>> Hi,
>>
>> I have a working 0.15.3 install and am trying to upgrade to 0.16.4.  I
>> want to start clean with an empty filesystem, so I just reformatted
>> the filesystem instead of using the upgrade option.  When I run
>> start-all.sh, I get a null pointer exception originating from the
>> NetUtils.getServerAddress() method.  This cluster is on a private
>> network, could there be a bug with the way hadoop is looking up the
>> address?  Other ideas?
>>
>> Here is the full error and stack trace from the namenode log:
>>
>> 2008-05-14 08:03:37,252 INFO org.apache.hadoop.fs.FSNamesystem:
>> fsOwner=qeadmin,qeadmin,wheel
>> 2008-05-14 08:03:37,253 INFO org.apache.hadoop.fs.FSNamesystem:
>> supergroup=supergroup
>> 2008-05-14 08:03:37,253 INFO org.apache.hadoop.fs.FSNamesystem:
>> isPermissionEnabled=true
>> 2008-05-14 08:03:37,358 INFO org.apache.hadoop.fs.FSNamesystem:
>> Finished loading FSImage in 137 msecs
>> 2008-05-14 08:03:37,362 INFO org.apache.hadoop.fs.FSNamesystem:
>> Leaving safemode after 142 msecs
>> 2008-05-14 08:03:37,362 INFO org.apache.hadoop.dfs.StateChange: STATE*
>> Network topology has 0 racks and 0 datanodes
>> 2008-05-14 08:03:37,363 INFO org.apache.hadoop.dfs.StateChange: STATE*
>> UnderReplicatedBlocks has 0 blocks
>> 2008-05-14 08:03:37,377 INFO org.apache.hadoop.fs.FSNamesystem:
>> Registered FSNamesystemStatusMBean
>> 2008-05-14 08:03:37,398 ERROR org.apache.hadoop.dfs.NameNode:
>> java.lang.NullPointerException
>>at
>> org.apache.hadoop.net.NetUtils.getServerAddress(NetUtils.java:148)
>>at
>> org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:279)
>>at org.apache.hadoop.dfs.FSNamesystem.&lt;init&gt;(FSNamesystem.java:235)
>>at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:131)
>>at org.apache.hadoop.dfs.NameNode.&lt;init&gt;(NameNode.java:176)
>>at org.apache.hadoop.dfs.NameNode.&lt;init&gt;(NameNode.java:162)
>>at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:846)
>>at org.apache.hadoop.dfs.NameNode.main(NameNode.java:855)
>>
>> 2008-05-14 08:03:37,399 INFO org.apache.hadoop.dfs.NameNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down NameNode at compute-0-0.local/192.168.1.254
>> ************************************************************/
>>
>>
>> Thanks
>>
>>
>
> --
> View this message in context: 
> http://www.nabble.com/Problem-with-start-all-on-0.16.4-tp17233437p17364262.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>
>


Re: Problem with start-all on 0.16.4

2008-05-21 Thread Jean-Adrien

Hi

Same problem for me. I tried to rm -rf the datastore as well (prior to
reformat) but no change. Any clue is welcome

Regards



Adam Wynne wrote:
> 
> Hi,
> 
> I have a working 0.15.3 install and am trying to upgrade to 0.16.4.  I
> want to start clean with an empty filesystem, so I just reformatted
> the filesystem instead of using the upgrade option.  When I run
> start-all.sh, I get a null pointer exception originating from the
> NetUtils.getServerAddress() method.  This cluster is on a private
> network, could there be a bug with the way hadoop is looking up the
> address?  Other ideas?
> 
> Here is the full error and stack trace from the namenode log:
> 
> 2008-05-14 08:03:37,252 INFO org.apache.hadoop.fs.FSNamesystem:
> fsOwner=qeadmin,qeadmin,wheel
> 2008-05-14 08:03:37,253 INFO org.apache.hadoop.fs.FSNamesystem:
> supergroup=supergroup
> 2008-05-14 08:03:37,253 INFO org.apache.hadoop.fs.FSNamesystem:
> isPermissionEnabled=true
> 2008-05-14 08:03:37,358 INFO org.apache.hadoop.fs.FSNamesystem:
> Finished loading FSImage in 137 msecs
> 2008-05-14 08:03:37,362 INFO org.apache.hadoop.fs.FSNamesystem:
> Leaving safemode after 142 msecs
> 2008-05-14 08:03:37,362 INFO org.apache.hadoop.dfs.StateChange: STATE*
> Network topology has 0 racks and 0 datanodes
> 2008-05-14 08:03:37,363 INFO org.apache.hadoop.dfs.StateChange: STATE*
> UnderReplicatedBlocks has 0 blocks
> 2008-05-14 08:03:37,377 INFO org.apache.hadoop.fs.FSNamesystem:
> Registered FSNamesystemStatusMBean
> 2008-05-14 08:03:37,398 ERROR org.apache.hadoop.dfs.NameNode:
> java.lang.NullPointerException
>at
> org.apache.hadoop.net.NetUtils.getServerAddress(NetUtils.java:148)
>at
> org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:279)
>at org.apache.hadoop.dfs.FSNamesystem.&lt;init&gt;(FSNamesystem.java:235)
>at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:131)
>at org.apache.hadoop.dfs.NameNode.&lt;init&gt;(NameNode.java:176)
>at org.apache.hadoop.dfs.NameNode.&lt;init&gt;(NameNode.java:162)
>at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:846)
>at org.apache.hadoop.dfs.NameNode.main(NameNode.java:855)
> 
> 2008-05-14 08:03:37,399 INFO org.apache.hadoop.dfs.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at compute-0-0.local/192.168.1.254
> ************************************************************/
> 
> 
> Thanks
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Problem-with-start-all-on-0.16.4-tp17233437p17364262.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.



Problem with start-all on 0.16.4

2008-05-14 Thread chuckanut
Hi,

I have a working 0.15.3 install and am trying to upgrade to 0.16.4.  I
want to start clean with an empty filesystem, so I just reformatted
the filesystem instead of using the upgrade option.  When I run
start-all.sh, I get a null pointer exception originating from the
NetUtils.getServerAddress() method.  This cluster is on a private
network; could there be a bug with the way hadoop is looking up the
address?  Other ideas?
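One quick way to tell whether a stale hadoop-default.xml is to blame is to diff the property names between the default file your daemons actually load and the one in the fresh release tarball. A sketch with mock files standing in for the real ones (the keys below are invented stand-ins; point the greps at your actual conf dirs):

```shell
# Mock default files standing in for a 0.15.x and a 0.16.x copy.
cat > old-default.xml <<'EOF'
<property><name>fs.default.name</name></property>
<property><name>dfs.info.bindAddress</name></property>
EOF
cat > new-default.xml <<'EOF'
<property><name>fs.default.name</name></property>
<property><name>dfs.http.address</name></property>
EOF

# Pull out the property names, sort them, and list the keys that exist
# only in the new release's file -- settings your old conf dir is missing.
grep -o '<name>[^<]*</name>' old-default.xml | sort > old-keys
grep -o '<name>[^<]*</name>' new-default.xml | sort > new-keys
new_only=$(comm -13 old-keys new-keys)
echo "$new_only"
```

Any key in that output is a candidate for the missing setting behind the NPE.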

Here is the full error and stack trace from the namenode log:

2008-05-14 08:03:37,252 INFO org.apache.hadoop.fs.FSNamesystem:
fsOwner=qeadmin,qeadmin,wheel
2008-05-14 08:03:37,253 INFO org.apache.hadoop.fs.FSNamesystem:
supergroup=supergroup
2008-05-14 08:03:37,253 INFO org.apache.hadoop.fs.FSNamesystem:
isPermissionEnabled=true
2008-05-14 08:03:37,358 INFO org.apache.hadoop.fs.FSNamesystem:
Finished loading FSImage in 137 msecs
2008-05-14 08:03:37,362 INFO org.apache.hadoop.fs.FSNamesystem:
Leaving safemode after 142 msecs
2008-05-14 08:03:37,362 INFO org.apache.hadoop.dfs.StateChange: STATE*
Network topology has 0 racks and 0 datanodes
2008-05-14 08:03:37,363 INFO org.apache.hadoop.dfs.StateChange: STATE*
UnderReplicatedBlocks has 0 blocks
2008-05-14 08:03:37,377 INFO org.apache.hadoop.fs.FSNamesystem:
Registered FSNamesystemStatusMBean
2008-05-14 08:03:37,398 ERROR org.apache.hadoop.dfs.NameNode:
java.lang.NullPointerException
   at org.apache.hadoop.net.NetUtils.getServerAddress(NetUtils.java:148)
   at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:279)
   at org.apache.hadoop.dfs.FSNamesystem.&lt;init&gt;(FSNamesystem.java:235)
   at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:131)
   at org.apache.hadoop.dfs.NameNode.&lt;init&gt;(NameNode.java:176)
   at org.apache.hadoop.dfs.NameNode.&lt;init&gt;(NameNode.java:162)
   at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:846)
   at org.apache.hadoop.dfs.NameNode.main(NameNode.java:855)

2008-05-14 08:03:37,399 INFO org.apache.hadoop.dfs.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at compute-0-0.local/192.168.1.254
************************************************************/


Thanks


Problem with start-all on 0.16.4

2008-05-14 Thread Adam Wynne
Hi,

I have a working 0.15.3 install and am trying to upgrade to 0.16.4.  I
want to start clean with an empty filesystem, so I just reformatted
the filesystem instead of using the upgrade option.  When I run
start-all.sh, I get a null pointer exception originating from the
NetUtils.getServerAddress() method.  This cluster is on a private
network; could there be a bug with the way hadoop is looking up the
address?  Other ideas?

Here is the full error and stack trace from the namenode log:

2008-05-14 08:03:37,252 INFO org.apache.hadoop.fs.FSNamesystem:
fsOwner=qeadmin,qeadmin,wheel
2008-05-14 08:03:37,253 INFO org.apache.hadoop.fs.FSNamesystem:
supergroup=supergroup
2008-05-14 08:03:37,253 INFO org.apache.hadoop.fs.FSNamesystem:
isPermissionEnabled=true
2008-05-14 08:03:37,358 INFO org.apache.hadoop.fs.FSNamesystem:
Finished loading FSImage in 137 msecs
2008-05-14 08:03:37,362 INFO org.apache.hadoop.fs.FSNamesystem:
Leaving safemode after 142 msecs
2008-05-14 08:03:37,362 INFO org.apache.hadoop.dfs.StateChange: STATE*
Network topology has 0 racks and 0 datanodes
2008-05-14 08:03:37,363 INFO org.apache.hadoop.dfs.StateChange: STATE*
UnderReplicatedBlocks has 0 blocks
2008-05-14 08:03:37,377 INFO org.apache.hadoop.fs.FSNamesystem:
Registered FSNamesystemStatusMBean
2008-05-14 08:03:37,398 ERROR org.apache.hadoop.dfs.NameNode:
java.lang.NullPointerException
at org.apache.hadoop.net.NetUtils.getServerAddress(NetUtils.java:148)
at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:279)
at org.apache.hadoop.dfs.FSNamesystem.&lt;init&gt;(FSNamesystem.java:235)
at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:131)
at org.apache.hadoop.dfs.NameNode.&lt;init&gt;(NameNode.java:176)
at org.apache.hadoop.dfs.NameNode.&lt;init&gt;(NameNode.java:162)
at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:846)
at org.apache.hadoop.dfs.NameNode.main(NameNode.java:855)

2008-05-14 08:03:37,399 INFO org.apache.hadoop.dfs.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at compute-0-0.local/192.168.1.254
************************************************************/


Thanks