RE: ERROR namenode.NameNode: java.io.IOException: Cannot remove current directory: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/current

2012-02-01 Thread Uma Maheswara Rao G
Can you try deleting this directory manually?
Also, please check whether another process is already running with this directory configured.
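A quick way to act on both suggestions might look like the sketch below. The path is taken from the error message above; `jps` and `fuser` are assumptions about what is available on a typical CDH3 box and may need installing:

```shell
# Stand-in for the NameNode storage dir from the error message above;
# adjust if dfs.name.dir points elsewhere in your hdfs-site.xml.
NAME_DIR=${NAME_DIR:-/var/lib/hadoop-0.20/cache/hadoop/dfs/name}

command -v jps >/dev/null && jps || true                   # any lingering Hadoop JVMs?
command -v fuser >/dev/null && fuser -v "$NAME_DIR" 2>/dev/null || true  # open files under it?
ls -ld "$NAME_DIR" 2>/dev/null || echo "no such directory: $NAME_DIR"
```

If a NameNode or SecondaryNameNode JVM shows up, stop it before deleting or re-formatting the directory.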

Regards,
Uma

From: Vijayakumar Ramdoss [nellaivi...@gmail.com]
Sent: Thursday, February 02, 2012 1:27 AM
To: common-user@hadoop.apache.org
Subject: ERROR namenode.NameNode: java.io.IOException: Cannot remove current directory: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/current

Hi All,
I am trying to start the NameNode on my machine, and it's throwing the error
message:
 *ERROR namenode.NameNode:
java.io.IOException: Cannot remove current directory:
/var/lib/hadoop-0.20/cache/hadoop/dfs/name/current*

Please refer the log information from here,
vijayram@ubuntu:/etc$ hadoop namenode -format
12/02/01 14:07:48 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2-cdh3u3
STARTUP_MSG:   build =
file:///data/1/tmp/nightly_2012-01-26_09-40-25_3/hadoop-0.20-0.20.2+923.194-1~squeeze
-r 03b655719d13929bd68bb2c2f9cee615b389cea9; compiled by 'root' on Thu Jan
26 11:54:44 PST 2012
************************************************************/
Re-format filesystem in /var/lib/hadoop-0.20/cache/hadoop/dfs/name ? (Y or
N) Y
12/02/01 14:08:10 INFO util.GSet: VM type   = 64-bit
12/02/01 14:08:10 INFO util.GSet: 2% max memory = 17.77875 MB
12/02/01 14:08:10 INFO util.GSet: capacity  = 2^21 = 2097152 entries
12/02/01 14:08:10 INFO util.GSet: recommended=2097152, actual=2097152
12/02/01 14:08:10 INFO security.UserGroupInformation: JAAS Configuration
already set up for Hadoop, not re-installing.
12/02/01 14:08:10 INFO namenode.FSNamesystem: fsOwner=vijayram (auth:SIMPLE)
12/02/01 14:08:10 INFO namenode.FSNamesystem: supergroup=supergroup
12/02/01 14:08:10 INFO namenode.FSNamesystem: isPermissionEnabled=false
12/02/01 14:08:10 INFO namenode.FSNamesystem:
dfs.block.invalidate.limit=1000
12/02/01 14:08:10 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/02/01 14:08:10 ERROR namenode.NameNode: java.io.IOException: Cannot
remove current directory: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/current
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:292)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1246)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1265)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1127)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1244)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1260)


Thanks and Regards
Vijay

nellaivi...@gmail.com


Re: ERROR namenode.NameNode: java.io.IOException: Cannot remove current directory: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/current

2012-02-01 Thread Harsh J
Vijay,

[Moving to cdh-u...@cloudera.org |
https://groups.google.com/a/cloudera.org/group/cdh-user/topics since
this is CDH3 specific]

You need to run that command as the 'hdfs' user, since those specific
directories are writable only by the group 'hadoop':
$ sudo -u hdfs hadoop namenode -format
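The failure mode itself is plain filesystem permissions: removing 'current' requires write access on its parent directory. Here is a minimal sketch that reproduces the symptom with throwaway temp paths (stand-ins, not the real /var/lib/hadoop-0.20 tree):

```shell
# Deleting 'current' needs write permission on its parent directory; the
# format fails when that parent is owned by a different user (e.g. hdfs).
name=$(mktemp -d)/name
mkdir -p "$name/current"

chmod 555 "$name"                  # parent not writable: mimics the wrong user
rm -rf "$name/current" 2>/dev/null || echo "cannot remove current directory"

chmod 755 "$name"                  # writable again, as it is for the owning user
rm -rf "$name/current" && echo "removed"
```

After formatting as 'hdfs', the daemons should typically be started as the same user so the directory ownership stays consistent.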

On Thu, Feb 2, 2012 at 1:27 AM, Vijayakumar Ramdoss
 wrote:
> [quoted text of the original message snipped]



-- 
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about