Jean-Marc:
Can you confirm that the JIRA Jesse logged reflects your case?
Thanks
On Dec 30, 2012, at 4:13 PM, Jesse Yates wrote:
> Hey,
>
> So the point of all the delete code in the cleaner is to try and delete
> each of the files in the directory and then delete the directory, assuming
> it's empty. It shouldn't leak the IOException if the directory is found
> to be empty and then gets a file added.
>
>>> ... directory doesn't exist because we can't really
>>> delete something which doesn't exist...
>>>
>>> My opinion.
>>>
>>> So the patch is ready, easy one ;) Just waiting for Jesse's feedback
>>> just in case.
Hey,

So the point of all the delete code in the cleaner is to try and delete
each of the files in the directory and then delete the directory, assuming
it's empty. It shouldn't leak the IOException if the directory is found
to be empty and then gets a file added.

This is really odd though, as f…
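The race Jesse describes (a directory is checked, found empty, then a file lands in it before the non-recursive delete runs) can be sketched with plain java.nio. This is only an illustration under assumed semantics, not HBase's FileSystem API: here `DirectoryNotEmptyException` plays the role of the IOE in the logs, and `tryDeleteIfEmpty`/`newTempDir` are hypothetical helpers.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.DirectoryNotEmptyException;
import java.nio.file.Files;
import java.nio.file.Path;

public class NonEmptyDeleteDemo {
    // Non-recursive delete, analogous to fs.delete(path, false): returns
    // true if the directory was removed, false if it turned out not to be
    // empty (or otherwise couldn't be deleted) -- the error is not leaked.
    static boolean tryDeleteIfEmpty(Path dir) {
        try {
            Files.delete(dir);
            return true;
        } catch (DirectoryNotEmptyException e) {
            return false; // lost the race: a file appeared after the check
        } catch (IOException e) {
            return false; // any other failure also just reports "not deleted"
        }
    }

    // Helper so callers don't have to handle checked IOExceptions.
    static Path newTempDir() {
        try {
            return Files.createTempDirectory("cleaner-demo");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = newTempDir();
        Files.createFile(dir.resolve("late-arrival")); // simulate the racing writer
        System.out.println(tryDeleteIfEmpty(dir));     // false: directory not empty
        Files.delete(dir.resolve("late-arrival"));
        System.out.println(tryDeleteIfEmpty(dir));     // true: empty now, so it goes
    }
}
```

The point is only that "found non-empty at delete time" is a normal outcome to swallow and report as false, not an exception to propagate.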
>> My opinion.
>>
>> So the patch is ready, easy one ;) Just waiting for Jesse's feedback
>> just in case.
>>
>> JM
>>
>> 2012/12/30, lars hofhansl:
>>> Nothing has changed around this in 0.94.4 as far as I know.
Nothing has changed around this in 0.94.4 as far as I know.

From: Jean-Marc Spaggiari
To: user@hbase.apache.org
Sent: Sunday, December 30, 2012 9:53 AM
Subject: Re: CleanerChore exception

> I was going to move to 0.94.4 today ;) And yes I'm using 0.94…
The Javadoc is saying:

"@return true if the directory was deleted, false otherwise"

So I think the line "return canDeleteThis ? fs.delete(toCheck, false)
: false;" is still correct. It's returning false if the directory has
not been deleted.

There is no exception here. If the TTL for a file had n…
Thanks for the confirmation.

Also, it seems that there is no test class related to
checkAndDeleteDirectory. It might be good to add that too.

I have extracted 0.94.3, 0.94.4RC0 and trunk, and they are all
identical for this method.

I will try to do some modifications and see the results...

So f…
Looking at this line in checkAndDeleteDirectory():

return canDeleteThis ? fs.delete(toCheck, false) : false;

If fs.delete() returns false, meaning the deletion was unsuccessful, the
parent directory tree wouldn't be deleted. I think this is inconsistent
with the javadoc for checkAndDeleteDirect…
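To make the contract under discussion concrete, here is a minimal, self-contained sketch of a checkAndDeleteDirectory-style routine in plain java.nio. This is an assumed shape, not the actual HBase CleanerChore code: it tries to empty the directory bottom-up, only attempts the non-recursive delete when everything underneath could be removed, and returns false rather than throwing when the delete doesn't happen.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class CheckAndDeleteSketch {
    // Returns true only if 'toCheck' itself was deleted, matching the
    // javadoc quoted in the thread: "true if the directory was deleted,
    // false otherwise".
    static boolean checkAndDeleteDirectory(Path toCheck) {
        boolean canDeleteThis = true;
        try (Stream<Path> children = Files.list(toCheck)) {
            for (Path child : (Iterable<Path>) children::iterator) {
                if (Files.isDirectory(child)) {
                    // Recurse; a subtree that can't be emptied blocks us too.
                    if (!checkAndDeleteDirectory(child)) canDeleteThis = false;
                } else {
                    canDeleteThis = false; // a plain file blocks deletion here
                }
            }
        } catch (IOException e) {
            return false;
        }
        if (!canDeleteThis) return false;
        try {
            Files.delete(toCheck); // non-recursive, like fs.delete(toCheck, false)
            return true;
        } catch (IOException e) {
            return false; // e.g. a file raced in: report "not deleted", don't throw
        }
    }

    // Unchecked helpers for the demo.
    static Path newTempDir() {
        try { return Files.createTempDirectory("cad-sketch"); }
        catch (IOException e) { throw new UncheckedIOException(e); }
    }
    static void touch(Path p) {
        try { Files.createFile(p); }
        catch (IOException e) { throw new UncheckedIOException(e); }
    }

    public static void main(String[] args) throws IOException {
        Path root = newTempDir();
        Files.createDirectory(root.resolve("empty-child"));
        System.out.println(checkAndDeleteDirectory(root)); // true: all empty

        Path root2 = newTempDir();
        touch(root2.resolve("hfile"));
        System.out.println(checkAndDeleteDirectory(root2)); // false: file blocks it
    }
}
```

Under this reading, a false return simply means "not deleted this pass"; the cleaner can retry on the next chore run, which is why leaking the IOE is the surprising part.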
Thanks for the digging. This concurs with my suspicion from the beginning.

I am copying Jesse, who wrote the code. He should have more insight on this.
After his confirmation, you can log a JIRA.

Cheers

On Sun, Dec 30, 2012 at 10:59 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:

> So…
So. Looking deeper, I found a few things.

First, why is checkAndDeleteDirectory not "simply" calling
FSUtils.delete(fs, toCheck, true)? I guess it's doing the same thing?

Also, FSUtils.listStatus(fs, toCheck, null); will return null if there
is no status, not just an empty array. And it's returning…
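The null-versus-empty point above can be shown with a stand-in in plain Java. The `listStatus` below is hypothetical, with the assumed "null when there are no entries" behaviour, not the real Hadoop FSUtils signature; the caller must treat null explicitly as "no children" or it will NPE.

```java
import java.util.List;

public class ListStatusNullDemo {
    // Stand-in for a listStatus-style call that returns null, not an
    // empty list, when the directory has no entries.
    static List<String> listStatus(List<String> entries) {
        return (entries == null || entries.isEmpty()) ? null : entries;
    }

    // Null-safe caller: null must mean "directory is empty"; calling
    // files.isEmpty() without the null check would throw an NPE.
    static boolean isEmptyDir(List<String> entries) {
        List<String> files = listStatus(entries);
        return files == null || files.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isEmptyDir(null));             // true
        System.out.println(isEmptyDir(List.of("hfile"))); // false
    }
}
```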
Regarding the logcleaner settings, I have not changed anything. It's
what came with the initial install, so I don't have anything set up for
this plugin in my configuration files.
For the files on the FS, here is what I have:
hadoop@node3:~/hadoop-1.0.3$ bin/hadoop fs -ls /hbase/.archive/entry_dupl
The exception came from this line:
if (file.isDir()) checkAndDeleteDirectory(file.getPath());
Looking at checkAndDeleteDirectory(), it recursively deletes files and
directories under the specified path.
Does /hbase/.archive/entry_duplicate only contain empty directories
underneath it?
I was going to move to 0.94.4 today ;) And yes, I'm using 0.94.3. I
might wait a bit in case some testing is required with my version.

Is this what you are looking for? http://pastebin.com/N8Q0FMba

I will keep the files for now since it seems it's not causing any
major issue. That will allow some…
Looks like you're using 0.94.3.

The archiver is a backport of:
HBASE-5547, Don't delete HFiles in backup mode

Can you provide more of the log where the IOE was reported, using pastebin?

Thanks

On Sun, Dec 30, 2012 at 9:08 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:

> Hi,
>
> I have a "…
Hi,

I have an "IOException: /hbase/.archive/table_name is non empty"
exception every minute in my logs.

There are 30 directories under this directory. The main directory is
from yesterday, but all the subdirectories are from December 10th, all
at the same time.

What is this .archive directory used for…