Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Lian, George (NSB - CN/Hangzhou)
Hi,

I suppose the zero-filled attr is a performance consideration for NFS, but for
fuse it leads to issues such as the hard LINK FOP failing.
So I suggest we add two attr fields at the end of "struct iatt {", such as
ia_fuse_nlink and ia_fuse_ctime.
In gf_zero_fill_stat(), save ia_nlink and ia_ctime into
ia_fuse_nlink/ia_fuse_ctime before setting them to zero,
and restore the saved nlink and ctime in gf_fuse_stat2attr(),
so that the kernel gets the correct nlink and ctime.
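
A rough sketch of the idea (field types simplified and the restore helper
name illustrative; the real struct iatt has many more fields):

#include <stdint.h>

/* Stand-in for gluster's struct iatt; the two ia_fuse_* fields at the
 * end are the proposed additions. */
struct iatt {
    uint32_t ia_nlink;       /* link count, zeroed today */
    uint32_t ia_ctime;       /* ctime seconds, zeroed today */
    uint32_t ia_fuse_nlink;  /* proposed: saved copy for the fuse path */
    uint32_t ia_fuse_ctime;  /* proposed: saved copy for the fuse path */
};

/* gf_zero_fill_stat() would save the real values before zeroing them. */
static void
gf_zero_fill_stat (struct iatt *buf)
{
    buf->ia_fuse_nlink = buf->ia_nlink;
    buf->ia_fuse_ctime = buf->ia_ctime;
    buf->ia_nlink = 0;
    buf->ia_ctime = 0;
}

/* gf_fuse_stat2attr() would restore them, so the kernel sees the correct
 * nlink and ctime. */
static void
gf_fuse_restore_stat (struct iatt *buf)
{
    buf->ia_nlink = buf->ia_fuse_nlink;
    buf->ia_ctime = buf->ia_fuse_ctime;
}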

Is this a workable solution? Are there any risks?

Please share your comments, thanks in advance!

Best Regards,
George

-----Original Message-----
From: gluster-devel-boun...@gluster.org 
[mailto:gluster-devel-boun...@gluster.org] On Behalf Of Niels de Vos
Sent: Wednesday, January 24, 2018 7:43 PM
To: Pranith Kumar Karampuri 
Cc: Lian, George (NSB - CN/Hangzhou) ; Zhou, 
Cynthia (NSB - CN/Hangzhou) ; Li, Deqian (NSB - 
CN/Hangzhou) ; Gluster-devel@gluster.org; Sun, Ping 
(NSB - CN/Hangzhou) 
Subject: Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't 
let NFS cache stat after writes"

On Wed, Jan 24, 2018 at 02:24:06PM +0530, Pranith Kumar Karampuri wrote:
> hi,
>In the same commit you mentioned earlier, there was this code
> earlier:
> -/* Returns 1 if the stat seems to be filled with zeroes. */
> -int
> -nfs_zero_filled_stat (struct iatt *buf)
> -{
> -if (!buf)
> -return 1;
> -
> -/* Do not use st_dev because it is transformed to store the xlator id
> - * in place of the device number. Do not use st_ino because by this time
> - * we've already mapped the root ino to 1 so it is not guaranteed to be
> - * 0.
> - */
> -if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
> -return 1;
> -
> -return 0;
> -}
> -
> -
> 
> I moved this to a common library function that can be used in afr as well.
> Why was it there in NFS? +Niels for answering that question.

Sorry, I don't know why that was done. It was introduced with the initial gNFS
implementation, long before I started to work with Gluster. The only reference 
I have is this from
xlators/nfs/server/src/nfs3-helpers.c:nfs3_stat_to_post_op_attr()

 371 /* Some performance translators return zero-filled stats when they
 372  * do not have up-to-date attributes. Need to handle this by not
 373  * returning these zeroed out attrs.
 374  */

This may not be true for the current situation anymore.

HTH,
Niels


> 
> If I give you a patch which will assert the error condition, would it 
> be possible for you to figure out the first xlator which is unwinding 
> the iatt with nlink count as zero but ctime as non-zero?
> 
> On Wed, Jan 24, 2018 at 1:03 PM, Lian, George (NSB - CN/Hangzhou) < 
> george.l...@nokia-sbell.com> wrote:
> 
> > Hi,  Pranith Kumar,
> >
> >
> >
> > Can you tell me why we need to set buf->ia_nlink to “0” in
> > gf_zero_fill_stat(), and which API or application cares about it?
> >
> > If I remove this line and update gf_is_zero_filled_stat
> > correspondingly,
> >
> > the issue seems gone, but I can’t confirm whether it will lead to other issues.
> >
> >
> >
> > So could you please double check it and give your comments?
> >
> >
> >
> > My change is as the below:
> >
> >
> >
> > gf_boolean_t
> >
> > gf_is_zero_filled_stat (struct iatt *buf)
> >
> > {
> >
> > if (!buf)
> >
> > return 1;
> >
> >
> >
> > /* Do not use st_dev because it is transformed to store the 
> > xlator id
> >
> >  * in place of the device number. Do not use st_ino because 
> > by this time
> >
> >  * we've already mapped the root ino to 1 so it is not 
> > guaranteed to be
> >
> >  * 0.
> >
> >  */
> >
> > //if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
> >
> > if (buf->ia_ctime == 0)
> >
> > return 1;
> >
> >
> >
> > return 0;
> >
> > }
> >
> >
> >
> > void
> >
> > gf_zero_fill_stat (struct iatt *buf)
> >
> > {
> >
> > //   buf->ia_nlink = 0;
> >
> > buf->ia_ctime = 0;
> >
> > }
> >
> >
> >
> > Thanks & Best Regards
> >
> > George
> >
> > *From:* Lian, George (NSB - CN/Hangzhou)
> > *Sent:* Friday, January 19, 2018 10:03 AM
> > *To:* Pranith Kumar Karampuri ; Zhou, Cynthia 
> > (NSB -
> > CN/Hangzhou) 
> > *Cc:* Li, Deqian (NSB - CN/Hangzhou) ; 
> > Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou) < 
> > ping@nokia-sbell.com>
> >
> > *Subject:* RE: [Gluster-devel] a link issue maybe introduced in a 
> > bug fix " Don't let NFS cache stat after writes"
> >
> >
> >
> > Hi,
> >
> > >>> Cool, this works for me too. Send me a mail off-list once you 
> > >>> are
> > available and we can figure out a way to get into a 

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Pranith Kumar Karampuri
On 25 Jan 2018 8:43 am, "Lian, George (NSB - CN/Hangzhou)" <
george.l...@nokia-sbell.com> wrote:

Hi,

I suppose the zero-filled attr is a performance consideration for NFS, but for
fuse it leads to issues such as the hard LINK FOP failing.
So I suggest we add two attr fields at the end of "struct iatt {", such as
ia_fuse_nlink and ia_fuse_ctime.
In gf_zero_fill_stat(), save ia_nlink and ia_ctime into
ia_fuse_nlink/ia_fuse_ctime before setting them to zero,
and restore the saved nlink and ctime in gf_fuse_stat2attr(),
so that the kernel gets the correct nlink and ctime.

Is this a workable solution? Are there any risks?

Please share your comments, thanks in advance!


Adding csaba for helping with this.


Best Regards,
George

-----Original Message-----
From: gluster-devel-boun...@gluster.org [mailto:gluster-devel-bounces@
gluster.org] On Behalf Of Niels de Vos
Sent: Wednesday, January 24, 2018 7:43 PM
To: Pranith Kumar Karampuri 
Cc: Lian, George (NSB - CN/Hangzhou) ; Zhou,
Cynthia (NSB - CN/Hangzhou) ; Li, Deqian (NSB
- CN/Hangzhou) ; Gluster-devel@gluster.org; Sun,
Ping (NSB - CN/Hangzhou) 
Subject: Re: [Gluster-devel] a link issue maybe introduced in a bug fix "
Don't let NFS cache stat after writes"

On Wed, Jan 24, 2018 at 02:24:06PM +0530, Pranith Kumar Karampuri wrote:
> hi,
>In the same commit you mentioned earlier, there was this code
> earlier:
> -/* Returns 1 if the stat seems to be filled with zeroes. */
> -int
> -nfs_zero_filled_stat (struct iatt *buf)
> -{
> -if (!buf)
> -return 1;
> -
> -/* Do not use st_dev because it is transformed to store the xlator id
> - * in place of the device number. Do not use st_ino because by this time
> - * we've already mapped the root ino to 1 so it is not guaranteed to be
> - * 0.
> - */
> -if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
> -return 1;
> -
> -return 0;
> -}
> -
> -
>
> I moved this to a common library function that can be used in afr as well.
> Why was it there in NFS? +Niels for answering that question.

Sorry, I don't know why that was done. It was introduced with the initial
gNFS implementation, long before I started to work with Gluster. The only
reference I have is this from
xlators/nfs/server/src/nfs3-helpers.c:nfs3_stat_to_post_op_attr()

 371 /* Some performance translators return zero-filled stats when
they
 372  * do not have up-to-date attributes. Need to handle this by
not
 373  * returning these zeroed out attrs.
 374  */

This may not be true for the current situation anymore.

HTH,
Niels


>
> If I give you a patch which will assert the error condition, would it
> be possible for you to figure out the first xlator which is unwinding
> the iatt with nlink count as zero but ctime as non-zero?
>
> On Wed, Jan 24, 2018 at 1:03 PM, Lian, George (NSB - CN/Hangzhou) <
> george.l...@nokia-sbell.com> wrote:
>
> > Hi,  Pranith Kumar,
> >
> >
> >
> > Can you tell me why we need to set buf->ia_nlink to “0” in
> > gf_zero_fill_stat(), and which API or application cares about it?
> >
> > If I remove this line and update gf_is_zero_filled_stat
> > correspondingly,
> >
> > the issue seems gone, but I can’t confirm whether it will lead to other issues.
> >
> >
> >
> > So could you please double check it and give your comments?
> >
> >
> >
> > My change is as the below:
> >
> >
> >
> > gf_boolean_t
> >
> > gf_is_zero_filled_stat (struct iatt *buf)
> >
> > {
> >
> > if (!buf)
> >
> > return 1;
> >
> >
> >
> > /* Do not use st_dev because it is transformed to store the
> > xlator id
> >
> >  * in place of the device number. Do not use st_ino because
> > by this time
> >
> >  * we've already mapped the root ino to 1 so it is not
> > guaranteed to be
> >
> >  * 0.
> >
> >  */
> >
> > //if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
> >
> > if (buf->ia_ctime == 0)
> >
> > return 1;
> >
> >
> >
> > return 0;
> >
> > }
> >
> >
> >
> > void
> >
> > gf_zero_fill_stat (struct iatt *buf)
> >
> > {
> >
> > //   buf->ia_nlink = 0;
> >
> > buf->ia_ctime = 0;
> >
> > }
> >
> >
> >
> > Thanks & Best Regards
> >
> > George
> >
> > *From:* Lian, George (NSB - CN/Hangzhou)
> > *Sent:* Friday, January 19, 2018 10:03 AM
> > *To:* Pranith Kumar Karampuri ; Zhou, Cynthia
> > (NSB -
> > CN/Hangzhou) 
> > *Cc:* Li, Deqian (NSB - CN/Hangzhou) ;
> > Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou) <
> > ping@nokia-sbell.com>
> >
> > *Subject:* RE: [Gluster-devel] a link issue maybe introduced in a
> > bug fix " Don't let NFS cache stat after writes"
> >
> >
> >
> > Hi,
> >
> > >>> Cool, this works for 

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Lian, George (NSB - CN/Hangzhou)
Hi,  Pranith Kumar,

Can you tell me why we need to set buf->ia_nlink to “0” in
gf_zero_fill_stat(), and which API or application cares about it?
If I remove this line and update gf_is_zero_filled_stat correspondingly,
the issue seems gone, but I can’t confirm whether it will lead to other issues.

So could you please double check it and give your comments?

My change is as the below:

gf_boolean_t
gf_is_zero_filled_stat (struct iatt *buf)
{
if (!buf)
return 1;

/* Do not use st_dev because it is transformed to store the xlator id
 * in place of the device number. Do not use st_ino because by this time
 * we've already mapped the root ino to 1 so it is not guaranteed to be
 * 0.
 */
//if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
if (buf->ia_ctime == 0)
return 1;

return 0;
}

void
gf_zero_fill_stat (struct iatt *buf)
{
//   buf->ia_nlink = 0;
buf->ia_ctime = 0;
}

Thanks & Best Regards
George
From: Lian, George (NSB - CN/Hangzhou)
Sent: Friday, January 19, 2018 10:03 AM
To: Pranith Kumar Karampuri ; Zhou, Cynthia (NSB - 
CN/Hangzhou) 
Cc: Li, Deqian (NSB - CN/Hangzhou) ; 
Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou) 

Subject: RE: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't 
let NFS cache stat after writes"

Hi,
>>> Cool, this works for me too. Send me a mail off-list once you are available 
>>> and we can figure out a way to get into a call and work on this.

Have you reproduced the issue per the steps I listed in
https://bugzilla.redhat.com/show_bug.cgi?id=1531457 and the last mail?

If not, I would suggest you try it yourself; the only difference between
your setup and mine is creating just 2 bricks instead of 6.

And Cynthia could have a session with you if needed, since I am not available
next Monday and Tuesday.

Thanks & Best Regards,
George

From: 
gluster-devel-boun...@gluster.org 
[mailto:gluster-devel-boun...@gluster.org] On Behalf Of Pranith Kumar Karampuri
Sent: Thursday, January 18, 2018 6:03 PM
To: Lian, George (NSB - CN/Hangzhou) 
>
Cc: Zhou, Cynthia (NSB - CN/Hangzhou) 
>; Li, Deqian 
(NSB - CN/Hangzhou) 
>; 
Gluster-devel@gluster.org; Sun, Ping (NSB - 
CN/Hangzhou) >
Subject: Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't 
let NFS cache stat after writes"



On Thu, Jan 18, 2018 at 12:17 PM, Lian, George (NSB - CN/Hangzhou) 
> wrote:
Hi,
>>>I actually tried it with replica-2 and replica-3 and then distributed 
>>>replica-2 before replying to the earlier mail. We can have a debugging 
>>>session if you are okay with it.

It is fine if you can’t reproduce the issue in your ENV.
And I have attached the detailed reproduce log in the Bugzilla FYI.

But I am sorry, I may be OOO on Monday and Tuesday next week, so a debug
session next Wednesday will be fine for me.

Cool, this works for me too. Send me a mail off-list once you are available and 
we can figure out a way to get into a call and work on this.



Paste the detail reproduce log FYI here:
root@ubuntu:~# gluster peer probe ubuntu
peer probe: success. Probe on localhost not needed
root@ubuntu:~# gluster v create test replica 2 ubuntu:/home/gfs/b1 
ubuntu:/home/gfs/b2 force
volume create: test: success: please start the volume to access data
root@ubuntu:~# gluster v start test
volume start: test: success
root@ubuntu:~# gluster v info test

Volume Name: test
Type: Replicate
Volume ID: fef5fca3-81d9-46d3-8847-74cde6f701a5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ubuntu:/home/gfs/b1
Brick2: ubuntu:/home/gfs/b2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
root@ubuntu:~# gluster v status
Status of volume: test
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick ubuntu:/home/gfs/b1   49152 0  Y   7798
Brick ubuntu:/home/gfs/b2   49153 0  Y   7818
Self-heal Daemon on localhost   N/A   N/AY   7839

Task Status of Volume test
--
There are no active volume tasks


root@ubuntu:~# gluster v set test cluster.consistent-metadata on
volume set: success

root@ubuntu:~# ls /mnt/test
ls: cannot 

Re: [Gluster-devel] Regression tests time

2018-01-24 Thread Xavi Hernandez
On Wed, Jan 24, 2018 at 3:11 PM, Jeff Darcy  wrote:

>
>
>
> On Tue, Jan 23, 2018, at 12:58 PM, Xavi Hernandez wrote:
>
> I've made some experiments [1] with the time that centos regression takes
> to complete. After some changes the time taken to run a full regression has
> dropped between 2.5 and 3.5 hours (depending on the run time of 2 tests,
> see below).
>
> Basically the changes are related with delays manually introduced in some
> places (sleeps in test files or even in the code, or delays in timer
> events). I've changed some sleeps with better ways to detect some
> condition, and I've left the delays in other places but with reduced time.
> Probably the used values are not the best ones in all cases, but it
> highlights that we should seriously consider how we detect things instead
> of simply waiting for some amount of time (and hope it's enough). The total
> test time is more than 2 hours less with these changes, so this means that
> >2 hours of the whole regression time is spent waiting unnecessarily.
>
>
> We should definitely try to detect specific conditions instead of just
> sleeping for a fixed amount of time. That said, sometimes it would take
> significant additional effort to add a marker for a condition plus code to
> check for it. We need to be *really* careful about changing timeouts in
> these cases. It's easy to come up with something that works on one
> development system and then causes spurious failures for others.
>

That happens when we use arbitrary delays. If we use an explicit check, it
will work on all systems. Additionally, using specific checks makes it
possible to define bigger timeouts to handle corner cases, because in the
normal case we'll continue as soon as the check is satisfied, which will be
almost always. If it really fails, only in those particular cases will it
take some time to detect, which is fine because this way we allow enough
time for "normal" delays.

One of the biggest problems I had to deal with when I implemented
> multiplexing was these kinds of timing dependencies in tests, and I had to
> go through it all again when I came to Facebook. While I applaud the effort
> to reduce single-test times, I believe that parallelizing tests will
> long-term be a more effective (and definitely safer) route to reducing
> overall latency.
>

I agree that parallelizing tests is the way to go, but if we reduce the
total time to 50%, the parallelized tests will also take 50% less
time.

Additionally, reducing the time it takes to run each test is a good way to
detect corner cases. If we always sleep in some cases, we could be missing
failures that can happen when there's no sleep (and users can issue the
same requests as we do, but without sleeping).


> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Shyam Ranganathan
On 01/10/2018 07:07 AM, Pranith Kumar Karampuri wrote:
> When stat is "zero filled", understanding is that the higher layer
> protocol doesn't send stat value to the kernel and a separate lookup is
> sent by the kernel to get the latest stat value. In which protocol are
> you seeing this issue? Fuse/NFS/SMB?



I was unaware of this assumption. I had been poking around some code for
RIO as we return only partial iatt in some cases and wanted to check the
stack to see what the assumptions are. What I found then (unfortunately
no ready reference is available) is that such an assumption was not
present.

Going forward though, the new iatt implementation (motivated by the
kernel statx implementation) should make this explicit with its
ia_flags in place (see [1]).

I hope we adopt this one sooner, so that whether a field value is 0/NULL
or simply not present is no longer the deciding factor in trusting the
iatt.
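
A rough sketch of the statx-style idea (the flag and struct names here
are illustrative, not the ones from the 4.0 proposal):

#include <stdbool.h>
#include <stdint.h>

#define IATT_NLINK (1ULL << 0)   /* ia_nlink carries a valid value */
#define IATT_CTIME (1ULL << 1)   /* ia_ctime carries a valid value */

struct iatt_v2 {
    uint64_t ia_flags;   /* which fields the producer actually filled in */
    uint32_t ia_nlink;
    uint32_t ia_ctime;
};

/* "Not present" is now distinguishable from "legitimately 0". */
static bool
iatt_has_nlink (const struct iatt_v2 *buf)
{
    return (buf->ia_flags & IATT_NLINK) != 0;
}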

Shyam

[1] new iatt in 4.0 version of the protocol:
https://review.gluster.org/#/c/18850/6/libglusterfs/src/iatt.h
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regression tests time

2018-01-24 Thread Jeff Darcy



On Tue, Jan 23, 2018, at 12:58 PM, Xavi Hernandez wrote:
> I've made some experiments [1] with the time that centos regression
> takes to complete. After some changes the time taken to run a full
> regression has dropped between 2.5 and 3.5 hours (depending on the run
> time of 2 tests, see below).> 
> Basically the changes are related with delays manually introduced in
> some places (sleeps in test files or even in the code, or delays in
> timer events). I've changed some sleeps with better ways to detect
> some condition, and I've left the delays in other places but with
> reduced time. Probably the used values are not the best ones in all
> cases, but it highlights that we should seriously consider how we
> detect things instead of simply waiting for some amount of time (and
> hope it's enough). The total test time is more than 2 hours less with
> these changes, so this means that >2 hours of the whole regression
> time is spent waiting unnecessarily.
We should definitely try to detect specific conditions instead of just
sleeping for a fixed amount of time. That said, sometimes it would take
significant additional effort to add a marker for a condition plus code
to check for it. We need to be *really* careful about changing timeouts
in these cases. It's easy to come up with something that works on one
development system and then causes spurious failures for others. One of
the biggest problems I had to deal with when I implemented multiplexing
was these kinds of timing dependencies in tests, and I had to go through
it all again when I came to Facebook. While I applaud the effort to
reduce single-test times, I believe that parallelizing tests will long-
term be a more effective (and definitely safer) route to reducing
overall latency.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Niels de Vos
On Wed, Jan 24, 2018 at 02:24:06PM +0530, Pranith Kumar Karampuri wrote:
> hi,
>In the same commit you mentioned earlier, there was this code
> earlier:
> -/* Returns 1 if the stat seems to be filled with zeroes. */
> -int
> -nfs_zero_filled_stat (struct iatt *buf)
> -{
> -if (!buf)
> -return 1;
> -
> -/* Do not use st_dev because it is transformed to store the xlator id
> - * in place of the device number. Do not use st_ino because by this time
> - * we've already mapped the root ino to 1 so it is not guaranteed to be
> - * 0.
> - */
> -if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
> -return 1;
> -
> -return 0;
> -}
> -
> -
> 
> I moved this to a common library function that can be used in afr as well.
> Why was it there in NFS? +Niels for answering that question.

Sorry, I don't know why that was done. It was introduced with the initial
gNFS implementation, long before I started to work with Gluster. The
only reference I have is this from
xlators/nfs/server/src/nfs3-helpers.c:nfs3_stat_to_post_op_attr()

 371 /* Some performance translators return zero-filled stats when they
 372  * do not have up-to-date attributes. Need to handle this by not
 373  * returning these zeroed out attrs.
 374  */

This may not be true for the current situation anymore.

HTH,
Niels


> 
> If I give you a patch which will assert the error condition, would it be
> possible for you to figure out the first xlator which is unwinding the iatt
> with nlink count as zero but ctime as non-zero?
> 
> On Wed, Jan 24, 2018 at 1:03 PM, Lian, George (NSB - CN/Hangzhou) <
> george.l...@nokia-sbell.com> wrote:
> 
> > Hi,  Pranith Kumar,
> >
> >
> >
> > Can you tell me why we need to set buf->ia_nlink to “0” in
> > gf_zero_fill_stat(), and which API or application cares about it?
> >
> > If I remove this line and update gf_is_zero_filled_stat
> > correspondingly,
> >
> > the issue seems gone, but I can’t confirm whether it will lead to other issues.
> >
> >
> >
> > So could you please double check it and give your comments?
> >
> >
> >
> > My change is as the below:
> >
> >
> >
> > gf_boolean_t
> >
> > gf_is_zero_filled_stat (struct iatt *buf)
> >
> > {
> >
> > if (!buf)
> >
> > return 1;
> >
> >
> >
> > /* Do not use st_dev because it is transformed to store the xlator
> > id
> >
> >  * in place of the device number. Do not use st_ino because by
> > this time
> >
> >  * we've already mapped the root ino to 1 so it is not guaranteed
> > to be
> >
> >  * 0.
> >
> >  */
> >
> > //if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
> >
> > if (buf->ia_ctime == 0)
> >
> > return 1;
> >
> >
> >
> > return 0;
> >
> > }
> >
> >
> >
> > void
> >
> > gf_zero_fill_stat (struct iatt *buf)
> >
> > {
> >
> > //   buf->ia_nlink = 0;
> >
> > buf->ia_ctime = 0;
> >
> > }
> >
> >
> >
> > Thanks & Best Regards
> >
> > George
> >
> > *From:* Lian, George (NSB - CN/Hangzhou)
> > *Sent:* Friday, January 19, 2018 10:03 AM
> > *To:* Pranith Kumar Karampuri ; Zhou, Cynthia (NSB -
> > CN/Hangzhou) 
> > *Cc:* Li, Deqian (NSB - CN/Hangzhou) ;
> > Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou) <
> > ping@nokia-sbell.com>
> >
> > *Subject:* RE: [Gluster-devel] a link issue maybe introduced in a bug fix
> > " Don't let NFS cache stat after writes"
> >
> >
> >
> > Hi,
> >
> > >>> Cool, this works for me too. Send me a mail off-list once you are
> > available and we can figure out a way to get into a call and work on this.
> >
> >
> >
> > Have you reproduced the issue per the steps I listed in
> > https://bugzilla.redhat.com/show_bug.cgi?id=1531457 and the last mail?
> >
> >
> >
> > If not, I would suggest you try it yourself; the only difference
> > between your setup and mine is creating just 2 bricks instead of 6.
> >
> >
> >
> > And Cynthia could have a session with you if needed, since I am not
> > available next Monday and Tuesday.
> >
> >
> >
> > Thanks & Best Regards,
> >
> > George
> >
> >
> >
> > *From:* gluster-devel-boun...@gluster.org [mailto:gluster-devel-bounces@
> > gluster.org ] *On Behalf Of *Pranith
> > Kumar Karampuri
> > *Sent:* Thursday, January 18, 2018 6:03 PM
> > *To:* Lian, George (NSB - CN/Hangzhou) 
> > *Cc:* Zhou, Cynthia (NSB - CN/Hangzhou) ;
> > Li, Deqian (NSB - CN/Hangzhou) ;
> > Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou) <
> > ping@nokia-sbell.com>
> > *Subject:* Re: [Gluster-devel] a link issue maybe introduced in a bug fix
> > " Don't let NFS cache stat after writes"
> >
> >
> >
> >
> >
> >
> >
> > On Thu, Jan 18, 2018 at 12:17 PM, Lian, George 

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Lian, George (NSB - CN/Hangzhou)
So I suppose ctime alone is enough to decide whether an iatt is good or not.
Why do we also include ia_nlink in gf_zero_fill_stat and
gf_is_zero_filled_stat?

From my investigation, if ia_nlink is set to 0 and the kernel reads the attr
on the RCU path, the kernel checks the ia_nlink field; a LINK operation then
fails with a "file does not exist" error:

if (inode->i_nlink == 0 && !(inode->i_state & I_LINKABLE))
        error = -ENOENT;


Best Regards,
George

From: gluster-devel-boun...@gluster.org 
[mailto:gluster-devel-boun...@gluster.org] On Behalf Of Pranith Kumar Karampuri
Sent: Wednesday, January 24, 2018 4:15 PM
To: Lian, George (NSB - CN/Hangzhou) 
Cc: Zhou, Cynthia (NSB - CN/Hangzhou) ; 
Gluster-devel@gluster.org; Li, Deqian (NSB - CN/Hangzhou) 
; Sun, Ping (NSB - CN/Hangzhou) 

Subject: Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't 
let NFS cache stat after writes"

If ctime is zero, no xlator should consider it a good iatt. The fact that
this is happening means some xlator is not doing proper checks in its code. We
need to find which xlator that is and fix it. The internet in our new office is
not working, so I'm not able to have a call with you today. What I would do is
put logs in the lookup, link, fstat, and stat calls to see whether anyone
unwinds an iatt with the ia_nlink count as zero but ctime as nonzero.

On 24 Jan 2018 1:03 pm, "Lian, George (NSB - CN/Hangzhou)" 
> wrote:
Hi,  Pranith Kumar,

Can you tell me why we need to set buf->ia_nlink to “0” in
gf_zero_fill_stat(), and which API or application cares about it?
If I remove this line and update gf_is_zero_filled_stat correspondingly,
the issue seems gone, but I can’t confirm whether it will lead to other issues.

So could you please double check it and give your comments?

My change is as the below:

gf_boolean_t
gf_is_zero_filled_stat (struct iatt *buf)
{
if (!buf)
return 1;

/* Do not use st_dev because it is transformed to store the xlator id
 * in place of the device number. Do not use st_ino because by this time
 * we've already mapped the root ino to 1 so it is not guaranteed to be
 * 0.
 */
//if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
if (buf->ia_ctime == 0)
return 1;

return 0;
}

void
gf_zero_fill_stat (struct iatt *buf)
{
//   buf->ia_nlink = 0;
buf->ia_ctime = 0;
}

Thanks & Best Regards
George
From: Lian, George (NSB - CN/Hangzhou)
Sent: Friday, January 19, 2018 10:03 AM
To: Pranith Kumar Karampuri >; 
Zhou, Cynthia (NSB - CN/Hangzhou) 
>
Cc: Li, Deqian (NSB - CN/Hangzhou) 
>; 
Gluster-devel@gluster.org; Sun, Ping (NSB - 
CN/Hangzhou) >

Subject: RE: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't 
let NFS cache stat after writes"

Hi,
>>> Cool, this works for me too. Send me a mail off-list once you are available 
>>> and we can figure out a way to get into a call and work on this.

Have you reproduced the issue per the steps I listed in
https://bugzilla.redhat.com/show_bug.cgi?id=1531457 and the last mail?

If not, I would suggest you try it yourself; the only difference between
your setup and mine is creating just 2 bricks instead of 6.

And Cynthia could have a session with you if needed, since I am not available
next Monday and Tuesday.

Thanks & Best Regards,
George

From: 
gluster-devel-boun...@gluster.org 
[mailto:gluster-devel-boun...@gluster.org] On Behalf Of Pranith Kumar Karampuri
Sent: Thursday, January 18, 2018 6:03 PM
To: Lian, George (NSB - CN/Hangzhou) 
>
Cc: Zhou, Cynthia (NSB - CN/Hangzhou) 
>; Li, Deqian 
(NSB - CN/Hangzhou) 
>; 
Gluster-devel@gluster.org; Sun, Ping (NSB - 
CN/Hangzhou) >
Subject: Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't 
let NFS cache stat after writes"



On Thu, Jan 18, 2018 at 12:17 PM, Lian, George (NSB - CN/Hangzhou) 
> wrote:
Hi,
>>>I actually tried it with replica-2 and replica-3 and then distributed 
>>>replica-2 before replying to the earlier mail. We can have a debugging 
>>>session if you are okay with it.

It is fine if you can’t reproduce the issue in 

Re: [Gluster-devel] Rawhide RPM builds failing

2018-01-24 Thread Niels de Vos
On Wed, Jan 24, 2018 at 02:53:40PM +0530, Nigel Babu wrote:
> More details: https://build.gluster.org/job/rpm-rawhide/1182/

With the changes for bug 1536186 it works fine for me. One patch still
needs to get merged though.

The error in the root.log of the job looks unrelated, it may have been
caused by a broken package in Fedora Rawhide. I could not identify a
real bug in the .spec.

Niels


> On Wed, Jan 24, 2018 at 2:03 PM, Niels de Vos  wrote:
> 
> > On Wed, Jan 24, 2018 at 09:14:51AM +0530, Nigel Babu wrote:
> > > Hello folks,
> > >
> > > Our rawhide rpm builds seem to be failing with what looks like a specfile
> > > issue. It's worth looking into this now before F28 is released in May.
> >
> > Do you have more details? The errors from a build.log from mock would
> > help. Which .spec are you using, the one from the GlusterFS sources, or
> > the one from Fedora?
> >
> > Please report it as a bug, either against Fedora/glusterfs or
> > GlusterFS/build.
> >
> > Thanks!
> > Niels
> >
> 
> 
> 
> -- 
> nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rawhide RPM builds failing

2018-01-24 Thread Nigel Babu
More details: https://build.gluster.org/job/rpm-rawhide/1182/

On Wed, Jan 24, 2018 at 2:03 PM, Niels de Vos  wrote:

> On Wed, Jan 24, 2018 at 09:14:51AM +0530, Nigel Babu wrote:
> > Hello folks,
> >
> > Our rawhide rpm builds seem to be failing with what looks like a specfile
> > issue. It's worth looking into this now before F28 is released in May.
>
> Do you have more details? The errors from a build.log from mock would
> help. Which .spec are you using, the one from the GlusterFS sources, or
> the one from Fedora?
>
> Please report it as a bug, either against Fedora/glusterfs or
> GlusterFS/build.
>
> Thanks!
> Niels
>



-- 
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Pranith Kumar Karampuri
On Wed, Jan 24, 2018 at 2:24 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> hi,
>In the same commit you mentioned earlier, there was this code
> earlier:
> -/* Returns 1 if the stat seems to be filled with zeroes. */
> -int
> -nfs_zero_filled_stat (struct iatt *buf)
> -{
> -if (!buf)
> -return 1;
> -
> -/* Do not use st_dev because it is transformed to store the xlator id
> - * in place of the device number. Do not use st_ino because by this time
> - * we've already mapped the root ino to 1 so it is not guaranteed to be
> - * 0.
> - */
> -if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
> -return 1;
> -
> -return 0;
> -}
> -
> -
>
> I moved this to a common library function that can be used in afr as well.
> Why was it there in NFS? +Niels for answering that question.
>
> If I give you a patch which will assert the error condition, would it be
> possible for you to figure out the first xlator which is unwinding the iatt
> with nlink count as zero but ctime as non-zero?
>

Hey,
  I went through the code; I think the gf_fuse_stat2attr() function could
be sending both the ctime and nlink count as zero to the kernel, like you were
mentioning. So I guess we should wait for Niels' answer about the need for
marking the nlink count as zero. We may need to patch the fuse code differently
and mark the entry/attr valid seconds/nseconds as zero, so that the kernel
issues a fresh lookup on the entry.
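
A rough sketch of that alternative, using the kernel fuse ABI structure
(gluster's fuse-bridge carries its own copy of these definitions; the
helper name is illustrative):

#include <linux/fuse.h>

/* When the stat is zero-filled, reply with zero validity so the kernel
 * caches nothing and issues a fresh LOOKUP instead of trusting the
 * zeroed nlink. */
static void
fuse_entry_mark_uncacheable (struct fuse_entry_out *feo)
{
    feo->entry_valid      = 0;  /* do not cache the dentry */
    feo->entry_valid_nsec = 0;
    feo->attr_valid       = 0;  /* do not cache the attributes */
    feo->attr_valid_nsec  = 0;
}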


> On Wed, Jan 24, 2018 at 1:03 PM, Lian, George (NSB - CN/Hangzhou) <
> george.l...@nokia-sbell.com> wrote:
>
>> Hi,  Pranith Kumar,
>>
>>
>>
>> Can you tell me why we need to set buf->ia_nlink to “0” in
>> gf_zero_fill_stat(), and which API or application cares about it?
>>
>> If I remove this line and update gf_is_zero_filled_stat
>> correspondingly,
>>
>> the issue seems gone, but I can’t confirm whether it will lead to other issues.
>>
>>
>>
>> So could you please double check it and give your comments?
>>
>>
>>
>> My change is as the below:
>>
>>
>>
>> gf_boolean_t
>>
>> gf_is_zero_filled_stat (struct iatt *buf)
>>
>> {
>>
>> if (!buf)
>>
>> return 1;
>>
>>
>>
>> /* Do not use st_dev because it is transformed to store the
>> xlator id
>>
>>  * in place of the device number. Do not use st_ino because by
>> this time
>>
>>  * we've already mapped the root ino to 1 so it is not guaranteed
>> to be
>>
>>  * 0.
>>
>>  */
>>
>> //if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
>>
>> if (buf->ia_ctime == 0)
>>
>> return 1;
>>
>>
>>
>> return 0;
>>
>> }
>>
>>
>>
>> void
>>
>> gf_zero_fill_stat (struct iatt *buf)
>>
>> {
>>
>> //   buf->ia_nlink = 0;
>>
>> buf->ia_ctime = 0;
>>
>> }
>>
>>
>>
>> Thanks & Best Regards
>>
>> George
>>
>> *From:* Lian, George (NSB - CN/Hangzhou)
>> *Sent:* Friday, January 19, 2018 10:03 AM
>> *To:* Pranith Kumar Karampuri ; Zhou, Cynthia (NSB
>> - CN/Hangzhou) 
>> *Cc:* Li, Deqian (NSB - CN/Hangzhou) ;
>> Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou) <
>> ping@nokia-sbell.com>
>>
>> *Subject:* RE: [Gluster-devel] a link issue maybe introduced in a bug
>> fix " Don't let NFS cache stat after writes"
>>
>>
>>
>> Hi,
>>
>> >>> Cool, this works for me too. Send me a mail off-list once you are
>> available and we can figure out a way to get into a call and work on this.
>>
>>
>>
>> Have you reproduced the issue per the steps I listed in
>> https://bugzilla.redhat.com/show_bug.cgi?id=1531457 and the last mail?
>>
>>
>>
>> If not, I would suggest you try it yourself; the only difference
>> between your setup and mine is creating just 2 bricks instead of 6.
>>
>>
>>
>> And Cynthia could have a session with you if needed, since I am not
>> available next Monday and Tuesday.
>>
>>
>>
>> Thanks & Best Regards,
>>
>> George
>>
>>
>>
>> *From:* gluster-devel-boun...@gluster.org [mailto:gluster-devel-bounces@
>> gluster.org ] *On Behalf Of *Pranith
>> Kumar Karampuri
>> *Sent:* Thursday, January 18, 2018 6:03 PM
>> *To:* Lian, George (NSB - CN/Hangzhou) 
>> *Cc:* Zhou, Cynthia (NSB - CN/Hangzhou) ;
>> Li, Deqian (NSB - CN/Hangzhou) ;
>> Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou) <
>> ping@nokia-sbell.com>
>> *Subject:* Re: [Gluster-devel] a link issue maybe introduced in a bug
>> fix " Don't let NFS cache stat after writes"
>>
>>
>>
>>
>>
>>
>>
>> On Thu, Jan 18, 2018 at 12:17 PM, Lian, George (NSB - CN/Hangzhou) <
>> george.l...@nokia-sbell.com> wrote:
>>
>> Hi,
>>
>> >>>I actually tried it with replica-2 and replica-3 and then distributed
>> replica-2 before replying to the earlier mail. We can have a debugging
>> session if you are okay with it.
>>
>>
>>
>> It 

Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Pranith Kumar Karampuri
hi,
   In the same commit you mentioned earlier, there was this code
earlier:
-/* Returns 1 if the stat seems to be filled with zeroes. */
-int
-nfs_zero_filled_stat (struct iatt *buf)
-{
-if (!buf)
-return 1;
-
-/* Do not use st_dev because it is transformed to store the xlator id
- * in place of the device number. Do not use st_ino because by this time
- * we've already mapped the root ino to 1 so it is not guaranteed to be
- * 0.
- */
-if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
-return 1;
-
-return 0;
-}
-
-

I moved this to a common library function that can be used in afr as well.
Why was it there in NFS? +Niels for answering that question.

If I give you a patch which will assert the error condition, would it be
possible for you to figure out the first xlator which is unwinding the iatt
with nlink count as zero but ctime as non-zero?

On Wed, Jan 24, 2018 at 1:03 PM, Lian, George (NSB - CN/Hangzhou) <
george.l...@nokia-sbell.com> wrote:

> Hi,  Pranith Kumar,
>
>
>
> Can you tell me why we need to set buf->ia_nlink to “0” in
> gf_zero_fill_stat(), and which API or application cares about it?
>
> If I remove this line and update gf_is_zero_filled_stat
> correspondingly,
>
> the issue seems gone, but I can’t confirm whether it will lead to other issues.
>
>
>
> So could you please double check it and give your comments?
>
>
>
> My change is as the below:
>
>
>
> gf_boolean_t
>
> gf_is_zero_filled_stat (struct iatt *buf)
>
> {
>
> if (!buf)
>
> return 1;
>
>
>
> /* Do not use st_dev because it is transformed to store the xlator
> id
>
>  * in place of the device number. Do not use st_ino because by
> this time
>
>  * we've already mapped the root ino to 1 so it is not guaranteed
> to be
>
>  * 0.
>
>  */
>
> //if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))
>
> if (buf->ia_ctime == 0)
>
> return 1;
>
>
>
> return 0;
>
> }
>
>
>
> void
>
> gf_zero_fill_stat (struct iatt *buf)
>
> {
>
> //   buf->ia_nlink = 0;
>
> buf->ia_ctime = 0;
>
> }
>
>
>
> Thanks & Best Regards
>
> George
>
> *From:* Lian, George (NSB - CN/Hangzhou)
> *Sent:* Friday, January 19, 2018 10:03 AM
> *To:* Pranith Kumar Karampuri ; Zhou, Cynthia (NSB -
> CN/Hangzhou) 
> *Cc:* Li, Deqian (NSB - CN/Hangzhou) ;
> Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou) <
> ping@nokia-sbell.com>
>
> *Subject:* RE: [Gluster-devel] a link issue maybe introduced in a bug fix
> " Don't let NFS cache stat after writes"
>
>
>
> Hi,
>
> >>> Cool, this works for me too. Send me a mail off-list once you are
> available and we can figure out a way to get into a call and work on this.
>
>
>
> Have you reproduced the issue per the steps I listed in
> https://bugzilla.redhat.com/show_bug.cgi?id=1531457 and the last mail?
>
>
>
> If not, I would suggest you try it yourself; the only difference
> between your setup and mine is creating just 2 bricks instead of 6.
>
>
>
> And Cynthia could have a session with you if needed, since I am not
> available next Monday and Tuesday.
>
>
>
> Thanks & Best Regards,
>
> George
>
>
>
> *From:* gluster-devel-boun...@gluster.org [mailto:gluster-devel-bounces@
> gluster.org ] *On Behalf Of *Pranith
> Kumar Karampuri
> *Sent:* Thursday, January 18, 2018 6:03 PM
> *To:* Lian, George (NSB - CN/Hangzhou) 
> *Cc:* Zhou, Cynthia (NSB - CN/Hangzhou) ;
> Li, Deqian (NSB - CN/Hangzhou) ;
> Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou) <
> ping@nokia-sbell.com>
> *Subject:* Re: [Gluster-devel] a link issue maybe introduced in a bug fix
> " Don't let NFS cache stat after writes"
>
>
>
>
>
>
>
> On Thu, Jan 18, 2018 at 12:17 PM, Lian, George (NSB - CN/Hangzhou) <
> george.l...@nokia-sbell.com> wrote:
>
> Hi,
>
> >>>I actually tried it with replica-2 and replica-3 and then distributed
> replica-2 before replying to the earlier mail. We can have a debugging
> session if you are okay with it.
>
>
>
> It is fine if you can’t reproduce the issue in your ENV.
>
> And I have attached the detailed reproduce log in the Bugzilla FYI.
>
>
>
> But I am sorry, I may be OOO on Monday and Tuesday next week, so a debug
> session next Wednesday will be fine for me.
>
>
>
> Cool, this works for me too. Send me a mail off-list once you are
> available and we can figure out a way to get into a call and work on this.
>
>
>
>
>
>
>
> Paste the detail reproduce log FYI here:
>
> *root@ubuntu:~# gluster peer probe ubuntu*
>
> *peer probe: success. Probe on localhost not needed*
>
> *root@ubuntu:~# gluster v create test replica 2 ubuntu:/home/gfs/b1
> ubuntu:/home/gfs/b2 force*
>
> *volume create: test: success: 

Re: [Gluster-devel] Rawhide RPM builds failing

2018-01-24 Thread Niels de Vos
On Wed, Jan 24, 2018 at 09:14:51AM +0530, Nigel Babu wrote:
> Hello folks,
> 
> Our rawhide rpm builds seem to be failing with what looks like a specfile
> issue. It's worth looking into this now before F28 is released in May.

Do you have more details? The errors from a build.log from mock would
help. Which .spec are you using, the one from the GlusterFS sources, or
the one from Fedora?

Please report it as a bug, either against Fedora/glusterfs or
GlusterFS/build.

Thanks!
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] a link issue maybe introduced in a bug fix " Don't let NFS cache stat after writes"

2018-01-24 Thread Pranith Kumar Karampuri
If ctime is zero, no xlator should consider it a good iatt. The fact that
this is happening means some xlator is not doing proper checks in its code. We
need to find which xlator that is and fix it. The internet in our new office is
not working, so I'm not able to have a call with you today. What I would do is
put logs in the lookup, link, fstat, and stat calls to see whether anyone
unwinds an iatt with the ia_nlink count as zero but ctime as nonzero.
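
A rough sketch of such a check (the helper name and logging style are
illustrative, not actual gluster code):

#include <stdint.h>
#include <stdio.h>

struct iatt { uint32_t ia_nlink; uint32_t ia_ctime; /* ... */ };

/* Call this wherever a fop unwinds an iatt: it flags the suspicious
 * combination of nlink == 0 with a non-zero ctime. */
static void
check_suspicious_iatt (const char *xl, const char *fop,
                       const struct iatt *buf)
{
    if (buf && buf->ia_nlink == 0 && buf->ia_ctime != 0)
        fprintf (stderr, "%s: %s unwound iatt with nlink==0, ctime=%u\n",
                 xl, fop, (unsigned) buf->ia_ctime);
}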

On 24 Jan 2018 1:03 pm, "Lian, George (NSB - CN/Hangzhou)" <
george.l...@nokia-sbell.com> wrote:

Hi,  Pranith Kumar,



Can you tell me why we need to set buf->ia_nlink to “0” in
gf_zero_fill_stat(), and which API or application cares about it?

If I remove this line and update gf_is_zero_filled_stat correspondingly,

the issue seems gone, but I can’t confirm whether it will lead to other issues.



So could you please double check it and give your comments?



My change is as the below:



gf_boolean_t

gf_is_zero_filled_stat (struct iatt *buf)

{

if (!buf)

return 1;



/* Do not use st_dev because it is transformed to store the xlator
id

 * in place of the device number. Do not use st_ino because by this
time

 * we've already mapped the root ino to 1 so it is not guaranteed
to be

 * 0.

 */

//if ((buf->ia_nlink == 0) && (buf->ia_ctime == 0))

if (buf->ia_ctime == 0)

return 1;



return 0;

}



void

gf_zero_fill_stat (struct iatt *buf)

{

//   buf->ia_nlink = 0;

buf->ia_ctime = 0;

}



Thanks & Best Regards

George

*From:* Lian, George (NSB - CN/Hangzhou)
*Sent:* Friday, January 19, 2018 10:03 AM
*To:* Pranith Kumar Karampuri ; Zhou, Cynthia (NSB -
CN/Hangzhou) 
*Cc:* Li, Deqian (NSB - CN/Hangzhou) ;
Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou) <
ping@nokia-sbell.com>

*Subject:* RE: [Gluster-devel] a link issue maybe introduced in a bug fix "
Don't let NFS cache stat after writes"



Hi,

>>> Cool, this works for me too. Send me a mail off-list once you are
available and we can figure out a way to get into a call and work on this.



Have you reproduced the issue per the steps I listed in
https://bugzilla.redhat.com/show_bug.cgi?id=1531457 and the last mail?



If not, I would suggest you try it yourself; the only difference
between your setup and mine is creating just 2 bricks instead of 6.



And Cynthia could have a session with you if needed, since I am not
available next Monday and Tuesday.



Thanks & Best Regards,

George



*From:* gluster-devel-boun...@gluster.org [mailto:gluster-devel-bounces@
gluster.org ] *On Behalf Of *Pranith
Kumar Karampuri
*Sent:* Thursday, January 18, 2018 6:03 PM
*To:* Lian, George (NSB - CN/Hangzhou) 
*Cc:* Zhou, Cynthia (NSB - CN/Hangzhou) ; Li,
Deqian (NSB - CN/Hangzhou) ;
Gluster-devel@gluster.org; Sun, Ping (NSB - CN/Hangzhou) <
ping@nokia-sbell.com>
*Subject:* Re: [Gluster-devel] a link issue maybe introduced in a bug fix "
Don't let NFS cache stat after writes"







On Thu, Jan 18, 2018 at 12:17 PM, Lian, George (NSB - CN/Hangzhou) <
george.l...@nokia-sbell.com> wrote:

Hi,

>>>I actually tried it with replica-2 and replica-3 and then distributed
replica-2 before replying to the earlier mail. We can have a debugging
session if you are okay with it.



It is fine if you can’t reproduce the issue in your ENV.

And I have attached the detailed reproduce log in the Bugzilla FYI.



But I am sorry, I may be OOO on Monday and Tuesday next week, so a debug
session next Wednesday will be fine for me.



Cool, this works for me too. Send me a mail off-list once you are available
and we can figure out a way to get into a call and work on this.







Paste the detail reproduce log FYI here:

*root@ubuntu:~# gluster peer probe ubuntu*

*peer probe: success. Probe on localhost not needed*

*root@ubuntu:~# gluster v create test replica 2 ubuntu:/home/gfs/b1
ubuntu:/home/gfs/b2 force*

*volume create: test: success: please start the volume to access data*

*root@ubuntu:~# gluster v start test*

*volume start: test: success*

*root@ubuntu:~# gluster v info test*



*Volume Name: test*

*Type: Replicate*

*Volume ID: fef5fca3-81d9-46d3-8847-74cde6f701a5*

*Status: Started*

*Snapshot Count: 0*

*Number of Bricks: 1 x 2 = 2*

*Transport-type: tcp*

*Bricks:*

*Brick1: ubuntu:/home/gfs/b1*

*Brick2: ubuntu:/home/gfs/b2*

*Options Reconfigured:*

*transport.address-family: inet*

*nfs.disable: on*

*performance.client-io-threads: off*

*root@ubuntu:~# gluster v status*

*Status of volume: test*

*Gluster process TCP Port  RDMA Port  Online
Pid*

*--*

*Brick ubuntu:/home/gfs/b1   49152