P.S.
Also, when starting DFS using bin/start-dfs.sh, I get the following error:
2011-04-13 09:42:31,729 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = inti84.cse.psu.edu/130.203.58.212
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2011-04-13 09:42:31,853 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.NullPointerException
	at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:175)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2011-04-13 09:42:31,854 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at inti84.cse.psu.edu/130.203.58.212
************************************************************/
(the same STARTUP_MSG banner and NullPointerException stack trace are logged again on a second startup attempt at 2011-04-13 09:44:03)
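For context on the trace above: in the 0.20.x line, `NameNode.getAddress` reads the NameNode URI from the configuration, and `NetUtils.createSocketAddr` throws exactly this NullPointerException when that value comes back null — typically because `fs.default.name` is missing from core-site.xml, or because the daemon on that node isn't picking up the conf directory at all. A minimal sketch of a core-site.xml, assuming the NameNode runs on the host from the log and using port 9000 only as a common convention (both are placeholders, not taken from the original setup):

```xml
<?xml version="1.0"?>
<!-- conf/core-site.xml: minimal sketch. fs.default.name is the
     0.20.x property name (later releases call it fs.defaultFS).
     Host and port below are assumptions; substitute your own. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://inti84.cse.psu.edu:9000</value>
  </property>
</configuration>
```

With a shared installation, every node reads this same file, so it is worth double-checking that the conf directory actually being used (e.g. via HADOOP_CONF_DIR) is the one containing it.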
On Wed, Apr 13, 2011 at 9:20 AM, bikash sharma wrote:
> Hi,
> I need to install Hadoop on a 16-node cluster. I have a couple of related
> questions:
> 1. I have installed Hadoop in a shared directory, i.e., there is just one
> place where all the Hadoop installation files exist, and all 16 nodes use
> the same installation.
> Is that an issue, or do I need to install Hadoop separately in a local
> directory on each node?
> 2. I installed hadoop-0.21, and after following the installation
> instructions, when I tried formatting, I got the following error:
>
> ************************************************************/
> Re-format filesystem in /var/tmp/data/dfs/name ? (Y or N) Y
> 11/04/13 09:16:23 INFO namenode.FSNamesystem: defaultReplication = 3
> 11/04/13 09:16:23 INFO namenode.FSNamesystem: maxReplication = 512
> 11/04/13 09:16:23 INFO namenode.FSNamesystem: minReplication = 1
> 11/04/13 09:16:23 INFO namenode.FSNamesystem: maxReplicationStreams = 2
> 11/04/13 09:16:23 INFO namenode.FSNamesystem: shouldCheckForEnoughRacks =
> false
> 11/04/13 09:16:23 INFO security.Groups: Group mapping
> impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
> cacheTimeout=30
> 11/04/13 09:16:23 INFO namenode.FSNamesystem: fsOwner=bus145
> 11/04/13 09:16:23 INFO namenode.FSNamesystem: supergroup=supergroup
> 11/04/13 09:16:23 INFO namenode.FSNamesystem: isPermissionEnabled=true
> 11/04/13 09:16:23 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
> accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> 11/04/13 09:16:24 INFO common.Storage: Cannot lock storage
> /var/tmp/data/dfs/name. The directory is already locked.
> 11/04/13 09:16:24 ERROR namenode.NameNode: java.io.IOException: Cannot lock
> storage /var/tmp/data/dfs/name. The directory is already locked.
> 	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:617)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1426)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1444)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.j
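On questions 1 and 2 together: a shared installation directory is generally workable as long as every daemon writes its runtime state to node-local storage. `Storage.lock` refuses with this exact "already locked" message when it finds an `in_use.lock` held in the storage directory — either a process from an earlier attempt still holds it, or several nodes resolve `dfs.name.dir`/`dfs.data.dir` to the same shared path. A sketch of an hdfs-site.xml that keeps state node-local; the paths below are placeholders (assumptions, not taken from this setup), and `${user.name}` relies on Hadoop's system-property expansion in configuration values:

```xml
<?xml version="1.0"?>
<!-- conf/hdfs-site.xml: sketch only. Choose paths on each node's
     local disk, not on the shared file system, so that no two
     daemons ever contend for the same in_use.lock. -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/var/tmp/${user.name}/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/var/tmp/${user.name}/dfs/data</value>
  </property>
</configuration>
```

If the lock is simply stale from a crashed attempt, confirming no NameNode process is still running and then removing the leftover `in_use.lock` before re-running the format should also clear it.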