RE: Protect from accidental deletes

2013-04-02 Thread ramon.pin
Hi Artem, right now HDFS has a trash feature that moves files removed with 'hadoop dfs -rm' to an intermediate directory (/trash). You can configure how long a file stays in that directory before it is actually removed from the filesystem. Look for 'fs.trash.interval' in your hdfs-s
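In case it helps, here is a minimal sketch of that configuration. This assumes a recent Hadoop release, where the property is commonly placed in core-site.xml; the value is in minutes, and 0 (the default) disables trash entirely:

```xml
<!-- core-site.xml: enable the HDFS trash feature.
     fs.trash.interval is the number of minutes a deleted file
     is kept in trash before it is permanently removed;
     a value of 0 (the default) disables trash. -->
<property>
  <name>fs.trash.interval</name>
  <value>1440</value> <!-- keep trashed files for 24 hours -->
</property>
```

With trash enabled, a file removed via the shell is moved under the deleting user's trash directory (typically /user/&lt;username&gt;/.Trash) and can be restored with an ordinary move until the interval expires. Note that 'hadoop fs -rm -skipTrash' bypasses trash and deletes immediately, so it offers no protection against accidental deletes.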

Re: Protect from accidental deletes

2013-04-01 Thread kojie . fu
you can set the property "fs.trash.interval"

From: Artem Ervits
Date: 2013-04-02 05:04
To: common-user@hadoop.apache.org
Subject: Protect from accidental deletes

Hello all, I'd like to know what users are doing to protect themselves from accidental deletes of files and directories in HDFS?