[jira] [Assigned] (HDFS-2821) Improve the Balancer to move data from over utilized nodes to under utilized nodes using balanced nodes

2015-02-11 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K reassigned HDFS-2821:
---

Assignee: (was: Devaraj K)

> Improve the Balancer to move data from over utilized nodes to under utilized 
> nodes using balanced nodes
> ---
>
> Key: HDFS-2821
> URL: https://issues.apache.org/jira/browse/HDFS-2821
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.20.205.0, 0.24.0, 0.23.1
>Reporter: Devaraj K
>
> h5.Cluster State Before Balancer Run:
> ||Node||Last Contact||Admin State||Configured Capacity(TB)||Used(TB)||Non DFS Used(TB)||Remaining(TB)||Used(%)||Remaining(%)||Blocks||
> |xxx-x-xx-n1|0|In Service|4.25|1.76|0.84|1.65|41.34|38.86|8465|
> |xxx-x-xx-n2|1|In Service|6.03|1.76|0.94|3.33|29.1|55.24|8465|
> |xxx-x-xx-n3|2|In Service|6.93|1.76|0.99|4.18|25.35|60.31|8465|
> |xxx-x-xx-n4|2|In Service|10.5|0|0.54|9.97|0|94.9|0|
> \\
> \\
> h5.Cluster State After Balancer Run:
> ||Node||Last Contact||Admin State||Configured Capacity(TB)||Used(TB)||Non DFS Used(TB)||Remaining(TB)||Used(%)||Remaining(%)||Blocks||
> |xxx-x-xx-n1|2|In Service|4.25|0.95|0.84|2.46|22.36|57.84|4830|
> |xxx-x-xx-n2|1|In Service|6.03|1.2|0.94|3.88|19.95|64.4|5858|
> |xxx-x-xx-n3|0|In Service|6.93|1.38|0.99|4.56|19.9|65.76|6327|
> |xxx-x-xx-n4|2|In Service|10.5|1.74|0.54|8.23|16.53|78.37|8383|
> \\
> Currently the Balancer moves data from over utilized nodes to under utilized 
> nodes, and this process continues until the cluster is balanced or there is 
> no data left to move from a source to a destination. In this process, once a 
> node's usage reaches avgUtilization it no longer participates in the 
> balancing.
> The above table shows the cluster usage before and after a balancer run with 
> a threshold of 1. After the balancer completes, n1 is still over utilized and 
> n4 is still under utilized. This may be because n4 already contains all the 
> blocks that are present on n1. I feel this can be improved further by moving 
> data from over utilized nodes to balanced nodes, and then from balanced nodes 
> to under utilized nodes.
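The proposed two-phase idea can be sketched as follows. This is a hypothetical simulation over raw terabytes, not the actual Balancer (which moves block replicas subject to placement and topology constraints); the node names, the 0.1 TB step size, and the iteration cap are all made up for illustration.

```python
# Hypothetical sketch of the proposal: when no direct over -> under move is
# possible, relay data through balanced nodes instead of stopping.

def rebalance(usage, capacity, threshold=1.0, step=0.1, max_iters=10000):
    """usage/capacity: dicts of node -> TB. Mutates and returns `usage`."""
    def pct(n):
        return 100.0 * usage[n] / capacity[n]

    avg = 100.0 * sum(usage.values()) / sum(capacity.values())
    for _ in range(max_iters):
        over = [n for n in usage if pct(n) > avg + threshold]
        under = [n for n in usage if pct(n) < avg - threshold]
        balanced = [n for n in usage if n not in over and n not in under]
        # Preference order for a (source, destination) pair:
        #   1. over -> under      (what the Balancer does today)
        #   2. over -> balanced   (proposed: park data on balanced nodes)
        #   3. balanced -> under  (proposed: relay it onward)
        pair = None
        for srcs, dsts in ((over, under), (over, balanced), (balanced, under)):
            cands = [(s, d) for s in srcs for d in dsts if pct(s) > pct(d)]
            if cands:
                pair = max(cands, key=lambda sd: pct(sd[0]) - pct(sd[1]))
                break
        if pair is None:
            break  # no useful move left
        src, dst = pair
        amount = min(step, usage[src])
        usage[src] -= amount
        usage[dst] += amount
    return usage
```

Run against the "before" table above, phases 2 and 3 let transfers keep making progress even when a direct over-to-under move is blocked.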



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-3847) using NFS As a shared storage for NameNode HA , how to ensure that only one write

2012-08-28 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K reassigned HDFS-3847:
---

Assignee: (was: Devaraj K)

> using NFS As a shared storage for NameNode HA , how to ensure that only one 
> write
> -
>
> Key: HDFS-3847
> URL: https://issues.apache.org/jira/browse/HDFS-3847
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.0.0-alpha, 2.0.1-alpha
>Reporter: liaowenrui
>Priority: Critical
> Fix For: 2.0.0-alpha
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-3847) using NFS As a shared storage for NameNode HA , how to ensure that only one write

2012-08-28 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K reassigned HDFS-3847:
---

Assignee: Devaraj K

> using NFS As a shared storage for NameNode HA , how to ensure that only one 
> write
> -
>
> Key: HDFS-3847
> URL: https://issues.apache.org/jira/browse/HDFS-3847
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.0.0-alpha, 2.0.1-alpha
>Reporter: liaowenrui
>Assignee: Devaraj K
>Priority: Critical
> Fix For: 2.0.0-alpha
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-3536) Node Manager leaks socket connections connected to Data Node

2012-06-14 Thread Devaraj K (JIRA)
Devaraj K created HDFS-3536:
---

 Summary: Node Manager leaks socket connections connected to Data 
Node
 Key: HDFS-3536
 URL: https://issues.apache.org/jira/browse/HDFS-3536
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Devaraj K
Assignee: Devaraj K
Priority: Critical


I am running the simple wordcount example with default configurations. Every 
job run adds one more socket connection to the data node, and it stays in 
CLOSE_WAIT state forever.
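The leak pattern can be illustrated in miniature. This is hypothetical code, not the NodeManager itself: a client that stores a per-job connection and never closes it keeps the file descriptor open on its side even after the remote end closes, which is exactly what leaves sockets stuck in CLOSE_WAIT.

```python
import socket

class LeakyClient:
    """Opens one connection per job and never closes it (the bug pattern)."""
    def __init__(self):
        self.conns = []

    def run_job(self, addr):
        conn = socket.create_connection(addr)
        self.conns.append(conn)  # the leak: the fd is retained forever

class FixedClient:
    """Closes the connection as soon as the job is done."""
    def run_job(self, addr):
        with socket.create_connection(addr) as conn:
            pass  # ... talk to the data node ...
        # leaving the `with` block closes the socket, so nothing
        # lingers in CLOSE_WAIT after the remote side hangs up
```

One leaked descriptor per job is invisible for a single run but exhausts the process fd limit on a long-lived daemon.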

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2324) start-dfs.sh and stop-dfs.sh are not working properly

2011-09-19 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated HDFS-2324:


Resolution: Duplicate
Status: Resolved  (was: Patch Available)

It will be taken care of as part of HADOOP-7642.

> start-dfs.sh and stop-dfs.sh are not working properly
> -
>
> Key: HDFS-2324
> URL: https://issues.apache.org/jira/browse/HDFS-2324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.24.0
>Reporter: Devaraj K
> Fix For: 0.24.0
>
> Attachments: HDFS-2324.patch
>
>
> When we execute start-dfs.sh, it gives the below error.
> {code:xml}
> linux124:/home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-hdfs-0.24.0-SNAPSHOT/sbin
>  # ./start-dfs.sh
> ./start-dfs.sh: line 50: 
> /home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-common-0.24.0-SNAPSHOT/libexec/../bin/hdfs:
>  No such file or directory
> Starting namenodes on []
> ./start-dfs.sh: line 55: 
> /home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-common-0.24.0-SNAPSHOT/libexec/../bin/hadoop-daemons.sh:
>  No such file or directory
> ./start-dfs.sh: line 68: 
> /home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-common-0.24.0-SNAPSHOT/libexec/../bin/hadoop-daemons.sh:
>  No such file or directory
> Secondary namenodes are not configured.  Cannot start secondary namenodes.
> {code}
> It gives the below error when we execute stop-dfs.sh.
> {code:xml}
> linux124:/home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-hdfs-0.24.0-SNAPSHOT/sbin
>  # ./stop-dfs.sh
> ./stop-dfs.sh: line 26: 
> /home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-common-0.24.0-SNAPSHOT/libexec/../bin/hdfs:
>  No such file or directory
> Stopping namenodes on []
> ./stop-dfs.sh: line 31: 
> /home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-common-0.24.0-SNAPSHOT/libexec/../bin/hadoop-daemons.sh:
>  No such file or directory
> ./stop-dfs.sh: line 44: 
> /home/devaraj/NextGenMR/Hadoop-0.24-09082011/hadoop-common-0.24.0-SNAPSHOT/libexec/../bin/hadoop-daemons.sh:
>  No such file or directory
> Secondary namenodes are not configured.  Cannot stop secondary namenodes.
> {code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2324) start-dfs.sh and stop-dfs.sh are not working properly

2011-09-14 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13105114#comment-13105114
 ] 

Devaraj K commented on HDFS-2324:
-

I think generating a combined tar ball would be a better option than overlaying 
the Common and HDFS directories after generating individual tar balls.

> start-dfs.sh and stop-dfs.sh are not working properly
> -
>
> Key: HDFS-2324
> URL: https://issues.apache.org/jira/browse/HDFS-2324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.24.0
>Reporter: Devaraj K
> Fix For: 0.24.0
>
> Attachments: HDFS-2324.patch
>
>

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2324) start-dfs.sh and stop-dfs.sh are not working properly

2011-09-14 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13104839#comment-13104839
 ] 

Devaraj K commented on HDFS-2324:
-

Thanks Tom for the info.

I am not overlaying the Common and HDFS directories. I am not seeing any 
problem other than the "/bin/hdfs: No such file or directory" error for 
secondary name nodes.

I feel it is better to handle this error as well: whoever does not want to 
overlay the Common and HDFS directories can then use the scripts without 
facing any problems.

Please provide your opinion on this.

> start-dfs.sh and stop-dfs.sh are not working properly
> -
>
> Key: HDFS-2324
> URL: https://issues.apache.org/jira/browse/HDFS-2324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.24.0
>Reporter: Devaraj K
> Fix For: 0.24.0
>
> Attachments: HDFS-2324.patch
>
>

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2324) start-dfs.sh and stop-dfs.sh are not working properly

2011-09-14 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated HDFS-2324:


Status: Patch Available  (was: Open)

Provided a trivial patch to fix this issue. Please review it.

> start-dfs.sh and stop-dfs.sh are not working properly
> -
>
> Key: HDFS-2324
> URL: https://issues.apache.org/jira/browse/HDFS-2324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.24.0
>Reporter: Devaraj K
> Fix For: 0.24.0
>
> Attachments: HDFS-2324.patch
>
>

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2324) start-dfs.sh and stop-dfs.sh are not working properly

2011-09-14 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13104596#comment-13104596
 ] 

Devaraj K commented on HDFS-2324:
-

Tom, the name node and datanodes are starting fine after the HDFS-2323 patch, 
but it is still giving the "No such file or directory" error on the console.

{code:xml}
./start-dfs.sh: line 50: 
/home/dev/HadoopRelease/hadoop-common-0.24.0-SNAPSHOT/libexec/../bin/hdfs: No 
such file or directory
Starting namenodes on []
localhost: starting namenode, logging to 
/home/dev/HadoopRelease/hadoop-common-0.24.0-SNAPSHOT/libexec/../logs/hadoop-root-namenode-linux-fr5y.out
localhost: starting datanode, logging to 
/home/dev/HadoopRelease/hadoop-common-0.24.0-SNAPSHOT/libexec/../logs/hadoop-root-datanode-linux-fr5y.out
Secondary namenodes are not configured.  Cannot start secondary namenodes.
{code}

{code:xml}
./stop-dfs.sh: line 26: 
/home/dev/HadoopRelease/hadoop-common-0.24.0-SNAPSHOT/libexec/../bin/hdfs: No 
such file or directory
Stopping namenodes on []
localhost: stopping namenode
localhost: stopping datanode
Secondary namenodes are not configured.  Cannot stop secondary namenodes.
{code}


> start-dfs.sh and stop-dfs.sh are not working properly
> -
>
> Key: HDFS-2324
> URL: https://issues.apache.org/jira/browse/HDFS-2324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.24.0
>Reporter: Devaraj K
> Fix For: 0.24.0
>
> Attachments: HDFS-2324.patch
>
>

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2324) start-dfs.sh and stop-dfs.sh are not working properly

2011-09-14 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated HDFS-2324:


Attachment: HDFS-2324.patch

> start-dfs.sh and stop-dfs.sh are not working properly
> -
>
> Key: HDFS-2324
> URL: https://issues.apache.org/jira/browse/HDFS-2324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.24.0
>Reporter: Devaraj K
> Fix For: 0.24.0
>
> Attachments: HDFS-2324.patch
>
>

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Moved] (HDFS-2324) start-dfs.sh and stop-dfs.sh are not working properly

2011-09-09 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K moved HADOOP-7618 to HDFS-2324:
-

  Component/s: (was: scripts)
   scripts
Fix Version/s: (was: 0.24.0)
   0.24.0
 Assignee: (was: Devaraj K)
Affects Version/s: (was: 0.24.0)
   0.24.0
  Key: HDFS-2324  (was: HADOOP-7618)
  Project: Hadoop HDFS  (was: Hadoop Common)

> start-dfs.sh and stop-dfs.sh are not working properly
> -
>
> Key: HDFS-2324
> URL: https://issues.apache.org/jira/browse/HDFS-2324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.24.0
>Reporter: Devaraj K
> Fix For: 0.24.0
>
>

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1594) When the disk becomes full Namenode is getting shutdown and not able to recover

2011-03-14 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13006368#comment-13006368
 ] 

Devaraj K commented on HDFS-1594:
-

During a write operation, if the name node hits a disk-full condition while 
updating the fsimage, it writes only as much as the available disk space 
allows and then shuts down immediately.

When we restart the name node, it tries to read the fsimage to initialize the 
name system and gets the EOFException shown in the description, because the 
fsimage does not contain the expected data. Please see the exception stack 
trace in the description for the exact failure point.
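Why a partial write surfaces later as an EOFException can be shown with a tiny sketch. This is illustrative Python, not the FSImage/edit-log code: a length-prefixed record that was cut off mid-write passes the length header but cannot be read back in full, which is where `readFully()` throws in the Java trace.

```python
import io
import struct

def write_record(out, payload, disk_space):
    """Append a length-prefixed record, but only as many bytes as fit.

    Models a disk-full condition: the tail of the record is silently lost.
    """
    data = struct.pack(">I", len(payload)) + payload
    out.write(data[:disk_space])

def read_record(src):
    """Read one record back; mirrors DataInputStream.readFully semantics."""
    header = src.read(4)
    if len(header) < 4:
        raise EOFError("truncated length header")
    (length,) = struct.unpack(">I", header)
    payload = src.read(length)
    if len(payload) < length:
        # Java's readFully throws EOFException at this point
        raise EOFError("truncated payload")
    return payload
```

The same record written with enough space round-trips cleanly, which is why the failure only appears after a disk-full shutdown.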


> When the disk becomes full Namenode is getting shutdown and not able to 
> recover
> ---
>
> Key: HDFS-1594
> URL: https://issues.apache.org/jira/browse/HDFS-1594
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0, 0.21.1, 0.22.0
> Environment: Linux linux124 2.6.27.19-5-default #1 SMP 2009-02-28 
> 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Devaraj K
> Fix For: 0.23.0
>
> Attachments: HDFS-1594.patch, HDFS-1594.patch, HDFS-1594.patch, 
> hadoop-root-namenode-linux124.log
>
>
> When the disk becomes full, the name node shuts down, and if we try to start 
> it after making space available it does not start and throws the below 
> exception.
> {code:xml} 
> 2011-01-24 23:23:33,727 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.
> java.io.EOFException
>   at java.io.DataInputStream.readFully(DataInputStream.java:180)
>   at org.apache.hadoop.io.UTF8.readFields(UTF8.java:117)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readString(FSImageSerialization.java:201)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:185)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:93)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:60)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1089)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1041)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:487)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:149)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:306)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:284)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:356)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:577)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:570)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1529)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1538)
> 2011-01-24 23:23:33,729 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.EOFException
>   at java.io.DataInputStream.readFully(DataInputStream.java:180)
>   at org.apache.hadoop.io.UTF8.readFields(UTF8.java:117)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readString(FSImageSerialization.java:201)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:185)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:93)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:60)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1089)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1041)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:487)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:149)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:306)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:284)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:356)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:577)

[jira] Updated: (HDFS-1594) When the disk becomes full Namenode is getting shutdown and not able to recover

2011-03-13 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated HDFS-1594:


Fix Version/s: 0.23.0
   Status: Patch Available  (was: Open)

> When the disk becomes full Namenode is getting shutdown and not able to 
> recover
> ---
>
> Key: HDFS-1594
> URL: https://issues.apache.org/jira/browse/HDFS-1594
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0, 0.21.1, 0.22.0
> Environment: Linux linux124 2.6.27.19-5-default #1 SMP 2009-02-28 
> 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Devaraj K
> Fix For: 0.23.0
>
> Attachments: HDFS-1594.patch, HDFS-1594.patch, HDFS-1594.patch, 
> hadoop-root-namenode-linux124.log
>
>
[jira] Commented: (HDFS-1594) When the disk becomes full Namenode is getting shutdown and not able to recover

2011-02-25 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12999375#comment-12999375
 ] 

Devaraj K commented on HDFS-1594:
-

Thanks Konstantin, the changes look good.

> When the disk becomes full Namenode is getting shutdown and not able to 
> recover
> ---
>
> Key: HDFS-1594
> URL: https://issues.apache.org/jira/browse/HDFS-1594
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0, 0.21.1, 0.22.0
> Environment: Linux linux124 2.6.27.19-5-default #1 SMP 2009-02-28 
> 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Devaraj K
> Attachments: HDFS-1594.patch, HDFS-1594.patch, HDFS-1594.patch, 
> hadoop-root-namenode-linux124.log
>
>
> When the disk becomes full, the NameNode shuts down, and if we try to start it 
> after making space available, it does not start and instead throws the below 
> exception.
> {code:xml} 
> 2011-01-24 23:23:33,727 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.
> java.io.EOFException
>   at java.io.DataInputStream.readFully(DataInputStream.java:180)
>   at org.apache.hadoop.io.UTF8.readFields(UTF8.java:117)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readString(FSImageSerialization.java:201)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:185)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:93)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:60)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1089)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1041)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:487)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:149)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:306)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.(FSNamesystem.java:284)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:356)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:577)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:570)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1529)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1538)
> 2011-01-24 23:23:33,729 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.EOFException
>   at java.io.DataInputStream.readFully(DataInputStream.java:180)
>   at org.apache.hadoop.io.UTF8.readFields(UTF8.java:117)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readString(FSImageSerialization.java:201)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:185)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:93)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:60)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1089)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1041)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:487)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:149)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:306)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.(FSNamesystem.java:284)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:356)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:577)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:570)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1529)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1538)
> 2011-01-24 23:23:33,730 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG: 
> /
> SHUTDOWN_MSG: Shutting down NameNode at linux124/10.18.52.124
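The EOFException above comes from `DataInputStream.readFully`, which fails when the stream ends before the requested number of bytes is available; that is exactly what happens when the edit log is cut short by a full disk. The following is a minimal standalone sketch of that failure mode (the class and method names are illustrative, not Hadoop code):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class TruncatedReadDemo {
    // Simulates loading a record from an edit log that was cut short when the
    // disk filled up: readFully() needs 8 bytes but the stream may hold fewer.
    public static boolean failsWithEof(byte[] partialRecord) {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(partialRecord))) {
            byte[] buf = new byte[8];
            in.readFully(buf);  // throws EOFException if fewer than 8 bytes remain
            return false;
        } catch (EOFException e) {
            return true;        // the same failure mode seen in the NameNode log above
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(failsWithEof(new byte[]{1, 2, 3}));  // true: truncated record
    }
}
```

Because the truncated record is hit every time the edit log is replayed, the NameNode fails identically on every restart until the log is repaired, which is why freeing disk space alone does not recover it.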

[jira] Commented: (HDFS-1594) When the disk becomes full Namenode is getting shutdown and not able to recover

2011-01-28 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12987985#action_12987985
 ] 

Devaraj K commented on HDFS-1594:
-

The submitted patch was prepared for the 0.22.0 branch, and some unnecessary 
spaces were introduced in the patch file, which make it difficult to review. I 
will resubmit the patch for trunk after fixing all the comments given above. 

> When the disk becomes full Namenode is getting shutdown and not able to 
> recover
> ---
>
> Key: HDFS-1594
> URL: https://issues.apache.org/jira/browse/HDFS-1594
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0, 0.21.1, 0.22.0
> Environment: Linux linux124 2.6.27.19-5-default #1 SMP 2009-02-28 
> 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Devaraj K
> Attachments: hadoop-root-namenode-linux124.log, HDFS-1594.patch
>
>

[jira] Updated: (HDFS-1594) When the disk becomes full Namenode is getting shutdown and not able to recover

2011-01-25 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated HDFS-1594:


Status: Open  (was: Patch Available)

> When the disk becomes full Namenode is getting shutdown and not able to 
> recover
> ---
>
> Key: HDFS-1594
> URL: https://issues.apache.org/jira/browse/HDFS-1594
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0, 0.21.1, 0.22.0
> Environment: Linux linux124 2.6.27.19-5-default #1 SMP 2009-02-28 
> 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Devaraj K
> Attachments: hadoop-root-namenode-linux124.log, HDFS-1594.patch
>
>

-- 
This message is automatically generated by JIRA.

[jira] Commented: (HDFS-1594) When the disk becomes full Namenode is getting shutdown and not able to recover

2011-01-24 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12986238#action_12986238
 ] 

Devaraj K commented on HDFS-1594:
-

Thanks Konstantin. I will make all the changes, test against trunk, and submit 
the patch.

> When the disk becomes full Namenode is getting shutdown and not able to 
> recover
> ---
>
> Key: HDFS-1594
> URL: https://issues.apache.org/jira/browse/HDFS-1594
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0, 0.21.1, 0.22.0
> Environment: Linux linux124 2.6.27.19-5-default #1 SMP 2009-02-28 
> 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Devaraj K
> Attachments: hadoop-root-namenode-linux124.log, HDFS-1594.patch
>
>

[jira] Commented: (HDFS-1594) When the disk becomes full Namenode is getting shutdown and not able to recover

2011-01-24 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12986222#action_12986222
 ] 

Devaraj K commented on HDFS-1594:
-

It passed on my system and showed +1, but the patch failed to apply in Hudson. 

Below is the result from my system: 

{code:xml} 
 [exec] +1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 3 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.
 [exec] 
 [exec] +1 system test framework.  The patch passed system test 
framework compile.
 [exec] 
 [exec] 
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Finished build.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 

BUILD SUCCESSFUL
Total time: 22 minutes 46 seconds
{code} 


> When the disk becomes full Namenode is getting shutdown and not able to 
> recover
> ---
>
> Key: HDFS-1594
> URL: https://issues.apache.org/jira/browse/HDFS-1594
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0, 0.21.1, 0.22.0
> Environment: Linux linux124 2.6.27.19-5-default #1 SMP 2009-02-28 
> 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Devaraj K
> Attachments: hadoop-root-namenode-linux124.log, HDFS-1594.patch
>
>
[jira] Updated: (HDFS-1594) When the disk becomes full Namenode is getting shutdown and not able to recover

2011-01-24 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated HDFS-1594:


Affects Version/s: 0.21.1
   0.21.0
 Release Note: 
Implemented a daemon thread that monitors the disk usage periodically; if the 
disk usage reaches the threshold value, the NameNode is put into safe mode so 
that no modifications to the file system can occur. Once the disk usage drops 
below the threshold, the NameNode is taken out of safe mode. Both the threshold 
value and the interval at which the disk usage is checked are configurable. 

   Status: Patch Available  (was: Open)

Provided the patch as per above solution.
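The release note above can be sketched as a small state machine: a periodic check compares free space against a configurable threshold and toggles safe mode accordingly. This is an illustrative sketch only; the class and method names (`NameNodeResourceMonitor`, `checkOnce`, the supplier of free bytes) are assumptions, not the actual patch's API:

```java
import java.util.function.LongSupplier;

public class NameNodeResourceMonitor {
    private final long thresholdBytes;          // configurable threshold
    private final LongSupplier availableBytes;  // e.g. () -> nameDir.getUsableSpace()
    private boolean inSafeMode = false;

    public NameNodeResourceMonitor(long thresholdBytes, LongSupplier availableBytes) {
        this.thresholdBytes = thresholdBytes;
        this.availableBytes = availableBytes;
    }

    /** One periodic check; a daemon thread would call this at the configured interval. */
    public void checkOnce() {
        long free = availableBytes.getAsLong();
        if (free < thresholdBytes && !inSafeMode) {
            inSafeMode = true;   // block namespace edits before the edit log can be truncated
        } else if (free >= thresholdBytes && inSafeMode) {
            inSafeMode = false;  // space recovered: resume normal operation
        }
    }

    public boolean isInSafeMode() { return inSafeMode; }

    public static void main(String[] args) {
        long[] free = {50L << 20};  // simulated free space: 50 MB
        NameNodeResourceMonitor m = new NameNodeResourceMonitor(100L << 20, () -> free[0]);
        m.checkOnce();
        System.out.println(m.isInSafeMode());  // true: below the 100 MB threshold
        free[0] = 200L << 20;
        m.checkOnce();
        System.out.println(m.isInSafeMode());  // false: space recovered
    }
}
```

The key design point is that safe mode is entered *before* the disk is completely full, so the edit log is never partially written and the EOFException on restart cannot occur.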


> When the disk becomes full Namenode is getting shutdown and not able to 
> recover
> ---
>
> Key: HDFS-1594
> URL: https://issues.apache.org/jira/browse/HDFS-1594
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0, 0.21.1, 0.22.0
> Environment: Linux linux124 2.6.27.19-5-default #1 SMP 2009-02-28 
> 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Devaraj K
> Attachments: hadoop-root-namenode-linux124.log, HDFS-1594.patch
>
>

[jira] Updated: (HDFS-1594) When the disk becomes full Namenode is getting shutdown and not able to recover

2011-01-24 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated HDFS-1594:


Attachment: HDFS-1594.patch

> When the disk becomes full Namenode is getting shutdown and not able to 
> recover
> ---
>
> Key: HDFS-1594
> URL: https://issues.apache.org/jira/browse/HDFS-1594
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0
> Environment: Linux linux124 2.6.27.19-5-default #1 SMP 2009-02-28 
> 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Devaraj K
> Attachments: hadoop-root-namenode-linux124.log, HDFS-1594.patch
>
>
> When the disk becomes full name node is shutting down and if we try to start 
> after making the space available It is not starting and throwing the below 
> exception.
> {code:xml} 
> 2011-01-24 23:23:33,727 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.
> java.io.EOFException
>   at java.io.DataInputStream.readFully(DataInputStream.java:180)
>   at org.apache.hadoop.io.UTF8.readFields(UTF8.java:117)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readString(FSImageSerialization.java:201)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:185)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:93)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:60)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1089)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1041)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:487)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:149)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:306)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.(FSNamesystem.java:284)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:356)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:577)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:570)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1529)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1538)
> 2011-01-24 23:23:33,729 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.EOFException
>   at java.io.DataInputStream.readFully(DataInputStream.java:180)
>   at org.apache.hadoop.io.UTF8.readFields(UTF8.java:117)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readString(FSImageSerialization.java:201)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:185)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:93)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:60)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1089)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1041)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:487)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:149)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:306)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:284)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:328)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:356)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:577)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:570)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1529)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1538)
> 2011-01-24 23:23:33,730 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG: 
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at linux124/10.18.52.124
> ************************************************************/
> {code} 

-- 
This message is automatically generated by JIRA.

[jira] Updated: (HDFS-1594) When the disk becomes full Namenode is getting shutdown and not able to recover

2011-01-24 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated HDFS-1594:


Summary: When the disk becomes full Namenode is getting shutdown and not 
able to recover  (was: When the disk becomes full Namenode is getting shutdown 
and not able recover)

> When the disk becomes full Namenode is getting shutdown and not able to 
> recover
> ---
>
> Key: HDFS-1594
> URL: https://issues.apache.org/jira/browse/HDFS-1594
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0
> Environment: Linux linux124 2.6.27.19-5-default #1 SMP 2009-02-28 
> 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Devaraj K
> Attachments: hadoop-root-namenode-linux124.log, HDFS-1594.patch
>
>
> When the disk becomes full the name node shuts down, and if we try to start 
> it after making space available it does not start, throwing the 
> java.io.EOFException quoted in full earlier in this thread.

[jira] Commented: (HDFS-1594) When the disk becomes full Namenode is getting shutdown and not able recover

2011-01-24 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12985686#action_12985686
 ] 

Devaraj K commented on HDFS-1594:
-

When the disk becomes full, the name node file system metadata (fsimage, 
edits) gets corrupted and the name node shuts down. When we try to restart, 
the name node does not start because its file system metadata is corrupted. 

This can be avoided as follows:

We can implement a daemon that monitors the disk usage periodically. If the 
disk usage reaches a threshold value, it puts the name node into safe mode so 
that no modifications to the file system can occur. Once the disk usage falls 
below the threshold, the name node is taken out of safe mode. 
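The threshold logic of the proposed monitor could be sketched roughly as 
below. This is a minimal illustration only, not the attached patch: the class 
name, the method name, and the byte-based threshold are all hypothetical, and 
the real daemon would call the name node's existing enter/leave safe-mode 
operations from a timer thread instead of returning a flag.

```java
// Hypothetical sketch of the proposed safe-mode guard for HDFS-1594.
// Names (DiskSpaceMonitor, update) are illustrative, not from the patch.
public class DiskSpaceMonitor {

    private final long thresholdBytes;   // minimum free space to allow edits
    private boolean inSafeMode = false;

    public DiskSpaceMonitor(long thresholdBytes) {
        this.thresholdBytes = thresholdBytes;
    }

    /**
     * Called periodically with the current free space of the name node
     * storage directory. Returns true while the name node should stay in
     * safe mode (i.e. no modifications written to the edit log).
     */
    public boolean update(long freeBytes) {
        if (freeBytes < thresholdBytes) {
            inSafeMode = true;           // stop edits before the disk fills up
        } else {
            inSafeMode = false;          // space reclaimed; resume normal mode
        }
        return inSafeMode;
    }
}
```

A usage example: with a 100-byte threshold, the monitor enters safe mode once 
free space drops below 100 bytes and leaves it again after space is reclaimed.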


> When the disk becomes full Namenode is getting shutdown and not able recover
> 
>
> Key: HDFS-1594
> URL: https://issues.apache.org/jira/browse/HDFS-1594
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0
> Environment: Linux linux124 2.6.27.19-5-default #1 SMP 2009-02-28 
> 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Devaraj K
> Attachments: hadoop-root-namenode-linux124.log
>
>
> When the disk becomes full the name node shuts down, and if we try to start 
> it after making space available it does not start, throwing the 
> java.io.EOFException quoted in full earlier in this thread.

[jira] Created: (HDFS-1594) When the disk becomes full Namenode is getting shutdown and not able recover

2011-01-24 Thread Devaraj K (JIRA)
When the disk becomes full Namenode is getting shutdown and not able recover


 Key: HDFS-1594
 URL: https://issues.apache.org/jira/browse/HDFS-1594
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.22.0
 Environment: Linux linux124 2.6.27.19-5-default #1 SMP 2009-02-28 
04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux

Reporter: Devaraj K
 Attachments: hadoop-root-namenode-linux124.log

When the disk becomes full the name node shuts down, and if we try to start it 
after making space available it does not start, throwing the exception below.



{code:xml} 

2011-01-24 23:23:33,727 ERROR 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
initialization failed.
java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:180)
at org.apache.hadoop.io.UTF8.readFields(UTF8.java:117)
at 
org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readString(FSImageSerialization.java:201)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:185)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:93)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:60)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1089)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1041)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:487)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:149)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:306)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:284)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:328)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:356)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:577)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:570)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1529)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1538)
2011-01-24 23:23:33,729 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: 
java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:180)
at org.apache.hadoop.io.UTF8.readFields(UTF8.java:117)
at 
org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readString(FSImageSerialization.java:201)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:185)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:93)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:60)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1089)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1041)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:487)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:149)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:306)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:284)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:328)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:356)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:577)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:570)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1529)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1538)

2011-01-24 23:23:33,730 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at linux124/10.18.52.124
************************************************************/


{code} 





[jira] Updated: (HDFS-1594) When the disk becomes full Namenode is getting shutdown and not able recover

2011-01-24 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated HDFS-1594:


Attachment: hadoop-root-namenode-linux124.log

> When the disk becomes full Namenode is getting shutdown and not able recover
> 
>
> Key: HDFS-1594
> URL: https://issues.apache.org/jira/browse/HDFS-1594
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0
> Environment: Linux linux124 2.6.27.19-5-default #1 SMP 2009-02-28 
> 04:40:21 +0100 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Devaraj K
> Attachments: hadoop-root-namenode-linux124.log
>
>
> When the disk becomes full the name node shuts down, and if we try to start 
> it after making space available it does not start, throwing the 
> java.io.EOFException quoted in full earlier in this thread.
