Panfei,
> we stopped the namenode and datanodes
This is also really hacky, but if all else fails...
It may already be too late, but if you are only running one datanode, you could 
look at your hdfs-site.xml, find the property named "dfs.data.dir", and go to 
that directory. Look around under there and see whether the block files still 
contain your data. Depending on how big your data was and how much other data 
you have in the filesystem, you may be able to piece your deleted data back together.
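In case it helps, here is a rough Python sketch of that idea: read dfs.data.dir out of hdfs-site.xml and list the block files under it. The config path is only an assumption (a typical CDH layout), and newer releases call the property dfs.datanode.data.dir; treat this as a starting point, not a recovery tool.

```python
# Sketch: find the configured datanode directories and list block files.
import os
import xml.etree.ElementTree as ET

HDFS_SITE = "/etc/hadoop/conf/hdfs-site.xml"  # assumption: typical CDH location


def data_dirs(conf_path):
    """Return the local directories configured as dfs.data.dir (or the newer name)."""
    root = ET.parse(conf_path).getroot()
    for prop in root.findall("property"):
        if prop.findtext("name") in ("dfs.data.dir", "dfs.datanode.data.dir"):
            # Values may be a comma-separated list and may carry a file:// prefix.
            return [d.strip().replace("file://", "", 1)
                    for d in prop.findtext("value").split(",")]
    return []


def block_files(dirs):
    """Walk each data dir and yield block files (named blk_<id>, skipping .meta)."""
    for d in dirs:
        for dirpath, _, files in os.walk(d):
            for name in files:
                if name.startswith("blk_") and not name.endswith(".meta"):
                    yield os.path.join(dirpath, name)


if __name__ == "__main__":
    for blk in block_files(data_dirs(HDFS_SITE)):
        print(blk)
```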
-- Eric Payne

From: Wei-Chiu Chuang <weic...@apache.org>
To: panfei <cnwe...@gmail.com>
Cc: Hdfs-dev <hdfs-dev@hadoop.apache.org>
Sent: Friday, August 4, 2017 7:57 AM
Subject: Re: How to restore data from HDFS rm -skipTrash
If the directory has snapshots enabled, the file can be retrieved from a past
snapshot.
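For reference, restoring from a snapshot is just a copy out of the read-only .snapshot directory. A minimal sketch, where the snapshot name and file paths are hypothetical:

```python
# Sketch: copy a deleted file back out of an existing snapshot.
import subprocess

# Hypothetical names: "snap-2017-08-03" and the paths are placeholders;
# ".snapshot" itself is the fixed directory name HDFS uses.
SNAPSHOT_COPY = "/user/data/.snapshot/snap-2017-08-03/important.csv"
RESTORE_PATH = "/user/data/important.csv"

# Standard FsShell copy from the read-only snapshot back into the live tree.
subprocess.run(["hdfs", "dfs", "-cp", SNAPSHOT_COPY, RESTORE_PATH], check=True)
```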

Otherwise, the file inodes are removed from the namenode metadata, and the blocks
are scheduled for deletion.
You might want to play with the edit log a bit: remove the delete entries from
the edit logs. But it's hacky and does not guarantee the blocks are still there.
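One possible way to do that is with the offline edits viewer: convert the edits file to XML with hdfs oev, drop the OP_DELETE records, and convert the result back to binary. A rough sketch only, with placeholder file names and no guarantee it fits your situation:

```python
# Edit-log surgery sketch (hacky; stop the namenode and back up fsimage/edits first).
# Workflow (edits file names are placeholders):
#   hdfs oev -i edits_000... -o edits.xml                       # binary -> XML
#   (run this script)                                           # filter deletes
#   hdfs oev -i edits.filtered.xml -o edits_000... -p binary    # XML -> binary
import xml.etree.ElementTree as ET

tree = ET.parse("edits.xml")        # output of the offline edits viewer
root = tree.getroot()               # <EDITS> root with <RECORD> children

for record in list(root.findall("RECORD")):
    if record.findtext("OPCODE") == "OP_DELETE":
        root.remove(record)         # drop the delete so it is not replayed

tree.write("edits.filtered.xml")
```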


On Thu, Aug 3, 2017 at 8:38 PM, panfei <cnwe...@gmail.com> wrote:

> ---------- Forwarded message ----------
> From: panfei <cnwe...@gmail.com>
> Date: 2017-08-04 11:23 GMT+08:00
> Subject: How to restore data from HDFS rm -skipTrash
> To: CDH Users <cdh-u...@cloudera.org>
>
>
> someone mistakenly did a rm -skipTrash operation on the HDFS, but we stopped
> the namenode and datanodes immediately. (CDH 5.4.5)
>
> I want to know: is there any way to stop the deletion process?
>
> And how?
>
> Thanks very much in advance.
>



-- 
A very happy Hadoop contributor


   
