2008/7/31 Raghavendra G <[EMAIL PROTECTED]>:
> Hi,
>
> Can you do a _find . | xargs touch_ and check whether brick A is
> self-healed?
Strange thing: after a night, all the files appeared on brick A,
but empty, with a creation date of Jan 1 1970, and without any extended
attributes. Maybe the slocate daemon to
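A quick way to check what actually landed on brick A is to look at the
backend files directly, bypassing the mount (a minimal sketch; the
backend export path /data/brickA is an assumption, not from this setup):

    # Hypothetical backend path for brick A's export directory.
    stat /data/brickA/test_file.txt
    # AFR keeps its bookkeeping in trusted.* extended attributes;
    # their absence suggests the file is just an empty stub.
    getfattr -d -m trusted -e hex /data/brickA/test_file.txt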
Hi,
Can you do a _find . | xargs touch_ and check whether brick A is
self-healed?
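Spelled out, that is just a full walk of the client mount (a minimal
sketch, assuming the volume is mounted at /mnt/gluster):

    cd /mnt/gluster
    # Touching every path through the client forces AFR to look up each
    # file, notice any missing copy, and re-create it on the stale brick.
    find . | xargs touch
    # For file names containing spaces, the null-delimited form is safer:
    # find . -print0 | xargs -0 touch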
regards,
On Thu, Jul 31, 2008 at 4:07 AM, Łukasz Osipiuk <[EMAIL PROTECTED]> wrote:
> Thanks for answers :)
>
> On Wed, Jul 30, 2008 at 8:52 PM, Martin Fick <[EMAIL PROTECTED]> wrote:
> > --- On Wed, 7/30/08, Łuka
Thanks for answers :)
On Wed, Jul 30, 2008 at 8:52 PM, Martin Fick <[EMAIL PROTECTED]> wrote:
> --- On Wed, 7/30/08, Łukasz Osipiuk <[EMAIL PROTECTED]> wrote:
>
[cut]
>> The more extreme example is: one of the data bricks explodes and
>> you replace it with a new one, configured as the one which went off
>
Kevan Benson wrote:
A while back I seem to remember someone talking about eventually
creating a fsck.glusterfs utility. Since underlying server node
corruption would (hopefully) not be a common problem, it seems like a
specific tool that could be run when prudent would be a good approach.
If
Previous quoted posts removed for brevity...
Martin Fick wrote:
It does seem like it would be fairly easy to add another
metadata attribute to each file/directory that would hold
a checksum for it. This way, AFR itself could be
configured to check/compute the checksum anytime the file
is rea
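As a rough illustration of the idea (a sketch only; the attribute name
user.glusterfs.checksum is made up for this example, and a real
implementation would live inside the translator, not in a script):

    f=/mnt/gluster/test_file.txt
    # Store a checksum next to the file's other extended attributes.
    setfattr -n user.glusterfs.checksum \
             -v "$(sha1sum "$f" | cut -d' ' -f1)" "$f"
    # On read, recompute and compare; a mismatch flags silent corruption.
    stored=$(getfattr --only-values -n user.glusterfs.checksum "$f")
    [ "$stored" = "$(sha1sum "$f" | cut -d' ' -f1)" ] || \
        echo "checksum mismatch: $f"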
--- On Wed, 7/30/08, Łukasz Osipiuk <[EMAIL PROTECTED]> wrote:
>> Step 1: Client1: cp test_file.txt /mnt/gluster/
>> Step 2: Brick1 and Brick4: have test_file.txt in
>> /mnt/gluster/ directory
>> Step 3: Client1: ls /mnt/gluster - test_file.txt is
>> present
>>
>> Step 4: Brick1: rm /mnt/gluster/test_fil
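The same scenario as plain commands (a sketch; as in the steps above,
the bricks export the same /mnt/gluster path they serve, and the host
name brick1 is an assumption):

    cp test_file.txt /mnt/gluster/   # Step 1: write through the client mount
    ls /mnt/gluster                  # Step 3: test_file.txt is visible
    # Step 4: delete the file directly on Brick1's export directory,
    # behind GlusterFS's back, to provoke the inconsistency:
    ssh brick1 rm /mnt/gluster/test_file.txt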
FYI, ls works okay in directories that only contain other directories. If
the directory contains files, it complains. After that, the namespace
glusterfsd processes seem to hang altogether and require a kill -9.
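For reference, recovering the hung namespace brick looks roughly like
this (a sketch; the volume spec file path is an assumption):

    # The hung glusterfsd ignores SIGTERM, so SIGKILL is required.
    pkill -9 -f glusterfsd
    # Restart the server with its spec file (path assumed):
    glusterfsd -f /etc/glusterfs/glusterfs-server.vol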
Thanks,
Brent
On Wed, 30 Jul 2008, Brent A Nelson wrote:
On Wed, 30 Jul 2008,
Hello,
As I am new to the group, let me introduce myself. My name is Lukasz
Osipiuk, and I am a
software developer in a large Polish IT company. We are considering using
GlusterFS for data storage, and
we have to minimize the probability of losing any data.
The following email should be in thread "Se
On Wed, 30 Jul 2008, Vikas Gorur wrote:
Brent,
Thanks for pin-pointing the patch. I tried to reproduce this with an
AFR+Unify setup. However, I haven't been able to yet. How easy is it to
reproduce this? Which operations did you do before it screwed up?
Right after mounting, I find that df w
Excerpts from Brent A Nelson's message of Wed Jul 30 00:05:13 +0530 2008:
> I did a few tla replay --reverse operations and found that patch level 258
> works fine (except for the previously reported fchmod and acl issues).
> Replay to 259, and it breaks as below. The posix cleanup patch breaks in m
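For anyone repeating the bisection, the tla side of it is roughly (a
sketch, assuming a tla checkout of the glusterfs tree):

    cd glusterfs
    tla logs | tail -1      # show the current patch level
    # Undo the newest patch in the working tree; repeat (rebuilding each
    # time) until the breakage disappears.
    tla replay --reverse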