Sent: Monday, August 17, 2015 4:28 PM
To: user@accumulo.apache.org
Subject: Re: Accumulo GC and Hadoop trash settings
Ok, I can see the benefit of being able to recover data. Is this process
documented? And is there any kind of user-friendly tool for it?
On Mon, Aug 17, 2015 at 4:11 PM, wrote:
> If you keep files around longer than {dfs.namenode.checkpoint.period}, then
> you have a chance to recover in case your most recent checkpoint is corrupt.
>
> If something goes wrong (i.e. somebody accidentally issues a big delete),
> then having the Trash around makes recovery plausible.
>
> *From: *"James Hughes" <jn...@virginia.edu>
> *To: *user@accumulo.apache.org
> *Sent: *Monday, August 17, 2015 3:57:57 PM
> *Subject: *Accumulo GC and Hadoop trash settings
> Hi all,
>
> From reading about the Accumulo GC, it sounds like temporary files are
> routinely deleted during GC cycles. In a small testing environment, I've seen
> the HDFS Accumulo user's .Trash folder have 10s of gigabytes of data.
>
> Is there any reason that the default value for gc.trash.ignore is false?
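[Editor's note: the HDFS trash behavior discussed in this thread is governed by the Hadoop fs.trash.interval setting; a minimal core-site.xml sketch follows. The 1440-minute value is illustrative, not taken from the thread.]

```xml
<!-- core-site.xml (sketch): enable the HDFS trash so that deletes are moved
     to /user/<user>/.Trash and kept for 1440 minutes (24 hours) before the
     trash checkpoint is purged. The interval value here is an assumption. -->
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
</property>
```

With trash enabled, a file the Accumulo GC has removed can be moved back out of the user's .Trash directory with a plain `hdfs dfs -mv`; setting gc.trash.ignore to true in the Accumulo configuration instead makes the GC delete files immediately, bypassing the trash.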