Hi Anuradha,
Please confirm whether this is a bug in glusterfs or whether we need to do
something at our end, because this problem is blocking our development.
Regards,
Abhishek
On Thu, Mar 17, 2016 at 1:54 PM, ABHISHEK PALIWAL wrote:
> Hi Anuradha,
>
> But in this case I need to
On Wed, Mar 16, 2016 at 11:59 PM, Atin Mukherjee wrote:
> -Atin
> Sent from one plus one
>
> On 16-Mar-2016 11:32 am, "Raghavendra Talur" wrote:
> >
> > Hi,
> >
> > Many fixes to tests were found not to have been backported to 3.7 and
> > other release branches.
- Original Message -
> From: "Anuradha Talur"
> To: "ABHISHEK PALIWAL"
> Cc: gluster-us...@gluster.org, gluster-devel@gluster.org
> Sent: Wednesday, March 16, 2016 5:32:26 PM
> Subject: Re: [Gluster-users] gluster volume heal info split brain
On Wed, Mar 09, 2016 at 08:26:44PM +0530, M S Vishwanath Bhat wrote:
> On 9 March 2016 at 19:39, Kaushal M wrote:
>
> > On Wed, Mar 9, 2016 at 7:02 PM, M S Vishwanath Bhat wrote:
> > > Hi,
> > >
> > > When we were discussing the readiness of
And for 256-byte inodes:
(597904 - 33000) KB used / (1066036 - 23) inodes == ~530 bytes per inode.
So I still consider 1 KB to be a good estimate for an average workload.
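(A minimal sketch of the estimate above, assuming GNU df and a hypothetical
brick path; the "before" figures are the ones quoted above.)

BRICK=/bricks/brick1        # hypothetical brick path
used_kb_before=33000        # initial used space, in KB (33M)
inodes_before=23            # initial inode count
used_kb_after=$(df -k --output=used "$BRICK" | tail -n1)
inodes_after=$(df --output=iused "$BRICK" | tail -n1)
# average on-disk bytes consumed per created inode
echo "$(( (used_kb_after - used_kb_before) * 1024 / (inodes_after - inodes_before) )) bytes per inode"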
Regards,
Oleksandr.
On Thursday, 17 March 2016 at 09:58:14 EET, Ravishankar N wrote:
> Looks okay to me, Oleksandr. You might want to make a github
[1] changes the retry behavior of write-behind in case of flush failures. Do
you think it needs to be called out in the release notes?
[1] http://review.gluster.org/12594
regards,
Raghavendra
- Original Message -
> From: "Vijay Bellur"
> To: "Gluster Devel"
Ravi, I will definitely arrange the results into a short, handy
document and post it here.
Also, @JoeJulian on IRC suggested that I run this test on XFS bricks
with inode sizes of 256 bytes and 1 KB:
===
22:38 <@JoeJulian> post-factum: Just wondering what 256 byte inodes
might look like for
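(For context, a minimal sketch of formatting an XFS brick with a non-default
inode size; the device and mount point are hypothetical, and the inode size
can only be set at mkfs time.)

mkfs.xfs -f -i size=256 /dev/sdb1          # 256-byte inodes
# or: mkfs.xfs -f -i size=1024 /dev/sdb1   # 1 KiB inodes
mount /dev/sdb1 /bricks/brick1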
OK, I've repeated the test with the following hierarchy:
* 10 top-level folders with 10 second-level folders each;
* 10 000 files in each second-level folder.
So, this composes 10×10×10 000 = 1M files in 100 second-level folders.
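(A minimal sketch of creating that hierarchy, assuming a hypothetical
glusterfs mount point:)

MNT=/mnt/glusterfs
for i in $(seq 1 10); do              # 10 top-level folders
    for j in $(seq 1 10); do          # 10 second-level folders each
        dir="$MNT/top$i/sub$j"
        mkdir -p "$dir"
        for k in $(seq 1 10000); do   # 10 000 files per second-level folder
            echo data > "$dir/file$k"
        done
    done
done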
Initial brick used space: 33 M
Initial inode count: 24
After test:
* each
On Mar 17, 2016 7:50 AM, "Pranith Kumar Karampuri" wrote:
>
>
>
> On 03/16/2016 11:46 PM, Raghavendra Talur wrote:
>>
>>
>>
>> On Wed, Mar 16, 2016 at 11:39 AM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
>>>
>>>
>>>
>>> On 03/16/2016 11:31 AM, Raghavendra Talur
- Original Message -
> From: "ABHISHEK PALIWAL"
> To: "Anuradha Talur"
> Cc: gluster-us...@gluster.org, gluster-devel@gluster.org
> Sent: Thursday, March 17, 2016 4:00:58 PM
> Subject: Re: [Gluster-users] gluster volume heal info split brain
On Fri, Mar 18, 2016 at 09:08:04AM -0400, Prasanna Kumar Kalever wrote:
gluster volume top $V0 open | grep -w "$F0" >/dev/null 2>&1
TEST [ $? -eq 0 ];
What do we expect here and what do we get?
I note that the test fails either if "gluster volume top" fails,
or if its output does not contain $F0.
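One way to make the expectation explicit (a sketch assuming the glusterfs
test framework, where TEST asserts that a command exits with status 0):

# Wrap the pipeline in a function so TEST asserts on its exit status;
# the pipeline's status is grep's, so the test fails when $F0 is absent
# from the "volume top open" output (including when the command errors out).
function file_seen_in_top_open {
    gluster volume top "$V0" open | grep -qw "$F0"
}
TEST file_seen_in_top_open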
Hi Anuradha,
But in this case I need to run tail on each file, which is a time-consuming
process, and on the other hand I can't pause my module until these files are
healed.
In any case, I need the output of the split-brain query (see below) to
resolve this problem.
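(For reference, the query in question, with a hypothetical volume name; an
empty listing here despite unsynced files is exactly the problem reported:)

gluster volume heal testvol info split-brain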
Regards,
Abhishek
On Wed, Mar 16, 2016 at 6:21 PM, ABHISHEK
Hi Anuradha,
The issue is resolved, but we have one more issue similar to this one, in
which the file does not get synced after following the steps mentioned in
the link you shared in the previous mail.
And the problem is that the split-brain command is not showing split-brain
entries.
On Thursday, March 17, 2016 11:25:33 AM, Jiffin Tony Thottan wrote:
> On 16/03/16 17:10, Prasanna Kumar Kalever wrote:
> > On Wednesday, March 16, 2016 3:55:52 PM, Niels de Vos wrote:
> >> On Wed, Mar 16, 2016 at 04:00:12AM -0400, Prasanna Kumar Kalever wrote:
> >>> Hi,
> >>>
> >>> Regarding
- Original Message -
From: "M S Vishwanath Bhat"
To: "Niels de Vos"
Cc: "Gluster Devel"
Sent: Thursday, March 17, 2016 8:18:23 AM
Subject: Re: [Gluster-devel] Location of distaf tests
On 17 March 2016 at 10:50,
Can you paste the link here?
-Prasanna
- Original Message -
> From: "Sakshi Bansal"
> To: "Prasanna Kumar Kalever"
> Cc: "Gluster Devel"
> Sent: Wednesday, March 16, 2016 3:35:04 PM
> Subject: Re:
-Atin
Sent from one plus one
On 16-Mar-2016 11:32 am, "Raghavendra Talur" wrote:
>
> Hi,
>
> Many fixes to tests were found not to have been backported to 3.7 and other
release branches.
> This causes tests to fail only in those branches and leaves the
maintainers puzzled.
>
>
-Atin
Sent from one plus one
On 17-Mar-2016 12:02 am, "Raghavendra Talur" wrote:
>
>
>
> On Wed, Mar 16, 2016 at 11:59 PM, Atin Mukherjee <atin.mukherje...@gmail.com> wrote:
>>
>> -Atin
>> Sent from one plus one
>>
>>
>> On 16-Mar-2016 11:32 am, "Raghavendra Talur"
On Wednesday, March 16, 2016 3:55:52 PM, Niels de Vos wrote:
> On Wed, Mar 16, 2016 at 04:00:12AM -0400, Prasanna Kumar Kalever wrote:
> > Hi,
> >
> > Regarding existing file snapshot:
> >
> > Currently we have qemu-block xlator which helps in creating a file
On Wed, Mar 16, 2016 at 11:31 AM, Raghavendra Talur wrote:
> Hi,
>
> Many fixes to tests were found not to have been backported to 3.7 and other
> release branches.
> This causes tests to fail only in those branches and leaves the
> maintainers puzzled.
>
> Also, this seems to
Hi Folks,
There is a review posted, http://review.gluster.org/#/c/12250, to which I
tacked on a review comment for an update to the replace-brick command. The gist
of it is at https://gist.github.com/portante/248407dbfb29c2515fc3
What do folks think of such a proposal?
Thanks!
-peter