Re: I am about to lose all my data please help

2014-03-23 Thread Fatih Haltas
No, of course not; I blanked it.


On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar praveen...@gmail.com wrote:

 Is this property correct?


 <property>
   <name>fs.default.name</name>
   <value>-BLANKED</value>
 </property>

 Regards
 Prav
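
A quick way to inspect the value that was blanked here (a sketch, assuming the
stock Hadoop 1.x layout under $HADOOP_HOME/conf):

  # fs.default.name is the default filesystem URI that clients and daemons use.
  grep -A 1 'fs.default.name' $HADOOP_HOME/conf/core-site.xml
  # Expect something like <value>hdfs://namenode-host:9000</value>; the host
  # and port must match what the datanodes and clients connect to.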


 [snip]

Re: I am about to lose all my data please help

2014-03-23 Thread Stanley Shi
Can you confirm that your namenode fsimage and edit log files are still there?
If not, then your data IS lost.

Regards,
Stanley Shi
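
One way to confirm this (a sketch; the directory comes from the hadoop.tmp.dir
quoted elsewhere in the thread, and the expected file names are the Hadoop 1.x
name-directory layout):

  # In Hadoop 1.x the namenode metadata lives under <dfs.name.dir>/current.
  ls -l /home/hadoop/project/hadoop-data/dfs/name/current/
  # A healthy directory contains: VERSION  edits  fsimage  fstime
  # Also check the /tmp default, in case the metadata was written there instead:
  ls -ld /tmp/hadoop-*/dfs/name 2>/dev/null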



On Sun, Mar 23, 2014 at 6:24 PM, Fatih Haltas fatih.hal...@nyu.edu wrote:

 No, of course not; I blanked it.


 [snip]

Re: I am about to lose all my data please help

2014-03-19 Thread Fatih Haltas
Thanks for your help, but I still could not solve my problem.


On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi s...@gopivotal.com wrote:

 Ah yes, I overlooked this. Then please check whether the files are there or not:
 ls /home/hadoop/project/hadoop-data/dfs/name

 Regards,
 Stanley Shi



 [snip]

Re: I am about to lose all my data please help

2014-03-18 Thread Azuryy Yu
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/project/hadoop-data</value>
  </property>


On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu azury...@gmail.com wrote:

 I don't think this is the case, because there is:
   <property>
     <name>hadoop.tmp.dir</name>
     <value>/home/hadoop/project/hadoop-data</value>
   </property>


 [snip]

Re: I am about to lose all my data please help

2014-03-18 Thread Azuryy Yu
I don't think this is the case, because there is:
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/project/hadoop-data</value>
  </property>


On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi s...@gopivotal.com wrote:

 One possible reason is that you didn't set the namenode working directory;
 by default it is under /tmp, and the OS may delete the /tmp folder without
 any notification. If this is the case, I am afraid you have lost all your
 namenode data.

 <property>
   <name>dfs.name.dir</name>
   <value>${hadoop.tmp.dir}/dfs/name</value>
   <description>Determines where on the local filesystem the DFS name node
   should store the name table (fsimage). If this is a comma-delimited list
   of directories then the name table is replicated in all of the
   directories, for redundancy.</description>
 </property>


 Regards,
 Stanley Shi



 [snip]

Re: I am about to lose all my data please help

2014-03-18 Thread Stanley Shi
Ah yes, I overlooked this. Then please check whether the files are there or not:
ls /home/hadoop/project/hadoop-data/dfs/name

Regards,
Stanley Shi
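
For comparison, a sketch of what that listing should show on a healthy Hadoop
1.x namenode (file names are from the 1.x name-directory layout; in_use.lock
exists only while the namenode holds the directory):

  ls /home/hadoop/project/hadoop-data/dfs/name
  # current  image  in_use.lock
  ls /home/hadoop/project/hadoop-data/dfs/name/current
  # VERSION  edits  fsimage  fstime
  # If these files are missing, the metadata was never written here.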



On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu azury...@gmail.com wrote:

 I don't think this is the case, because there is:
   <property>
     <name>hadoop.tmp.dir</name>
     <value>/home/hadoop/project/hadoop-data</value>
   </property>


 [snip]

Re: I am about to lose all my data please help

2014-03-17 Thread Stanley Shi
One possible reason is that you didn't set the namenode working directory;
by default it is under /tmp, and the OS may delete the /tmp folder without
any notification. If this is the case, I am afraid you have lost all your
namenode data.

<property>
  <name>dfs.name.dir</name>
  <value>${hadoop.tmp.dir}/dfs/name</value>
  <description>Determines where on the local filesystem the DFS name node
  should store the name table (fsimage). If this is a comma-delimited list
  of directories then the name table is replicated in all of the
  directories, for redundancy.</description>
</property>


Regards,
Stanley Shi
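
If the metadata can be recovered, it is worth pinning dfs.name.dir to durable
storage so a /tmp cleanup can never take it away again. A sketch for
conf/hdfs-site.xml (the paths are illustrative, not from this thread):

  <property>
    <name>dfs.name.dir</name>
    <!-- Two directories, ideally on different disks, for redundancy. -->
    <value>/home/hadoop/dfs/name,/mnt/disk2/dfs/name</value>
  </property>

The comma-separated list is Hadoop's built-in redundancy: the namenode writes
the same fsimage and edits to every listed directory.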



On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf mirko.kae...@gmail.com wrote:

 Hi,

 What is the location of the namenode's fsimage and edit logs?
 And how much memory does the NameNode have?

 Did you work with a Secondary NameNode or a Standby NameNode for
 checkpointing?

 Where are your HDFS blocks located, and are those still safe?

 With this information at hand, one might be able to fix your setup, but do
 not format the old namenode before all is working with a fresh one.

 Grab a copy of the maintenance guide:
 http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
 It helps with solving this type of problem as well.

 Best wishes
 Mirko


 [snip]

Re: I am about to lose all my data please help

2014-03-16 Thread Mirko Kämpf
Hi,

What is the location of the namenode's fsimage and edit logs?
And how much memory does the NameNode have?

Did you work with a Secondary NameNode or a Standby NameNode for
checkpointing?

Where are your HDFS blocks located, and are those still safe?

With this information at hand, one might be able to fix your setup, but do
not format the old namenode before all is working with a fresh one.

Grab a copy of the maintenance guide:
http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
It helps with solving this type of problem as well.

Best wishes
Mirko
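
To answer these questions concretely, a few commands help (a sketch assuming
Hadoop 1.x defaults; the data paths come from the hadoop.tmp.dir quoted
elsewhere in the thread):

  # Where should fsimage/edits live? Check the effective configuration.
  grep -B 1 -A 1 -E 'dfs.name.dir|hadoop.tmp.dir|fs.checkpoint.dir' \
      $HADOOP_HOME/conf/core-site.xml $HADOOP_HOME/conf/hdfs-site.xml
  # Is the namenode metadata still on disk?
  ls -l /home/hadoop/project/hadoop-data/dfs/name/current/
  # Did a SecondaryNameNode leave a checkpoint copy?
  ls -l /home/hadoop/project/hadoop-data/dfs/namesecondary/current/
  # Are the HDFS blocks themselves still there on the datanodes?
  ls /home/hadoop/project/hadoop-data/dfs/data/current/ | head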


2014-03-16 9:07 GMT+00:00 Fatih Haltas fatih.hal...@nyu.edu:

 Dear All,

 I have just restarted the machines of my hadoop cluster. Now I am trying to
 start the hadoop cluster again, but I am getting an error on namenode
 restart. I am afraid of losing my data, as the cluster was running properly
 for more than 3 months. I believe that if I format the namenode it will work
 again; however, the data will be lost. Is there any way to solve this
 without losing the data?

 I will really appreciate any help.

 Thanks.


 =
 Here are the logs:
 
 2014-02-26 16:02:39,698 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
 /
 STARTUP_MSG: Starting NameNode
 STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
 STARTUP_MSG:   args = []
 STARTUP_MSG:   version = 1.0.4
 STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
 /
 2014-02-26 16:02:40,005 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
 2014-02-26 16:02:40,019 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
 2014-02-26 16:02:40,021 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
 2014-02-26 16:02:40,021 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
 2014-02-26 16:02:40,169 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
 2014-02-26 16:02:40,193 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
 2014-02-26 16:02:40,194 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
 2014-02-26 16:02:40,273 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
 2014-02-26 16:02:40,273 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
 2014-02-26 16:02:40,274 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
 2014-02-26 16:02:40,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
 2014-02-26 16:02:40,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
 2014-02-26 16:02:40,724 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
 2014-02-26 16:02:40,749 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
 2014-02-26 16:02:40,780 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
 java.io.IOException: NameNode is not formatted.
 at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
 at