Re: [Nfs-ganesha-devel] drc refcnt

2017-05-03 Thread Malahal Naineni
Matt, you are correct. We lose some memory (DRC and dupreqs) for a client
that never reconnects. A purely time-based strategy is not scalable either,
unless we fork multiple threads for it. My understanding is that there will
be one time-based strategy (hopefully with a period long enough that it does
not interfere with the current strategy) in __addition__ to the current
retiring strategy.
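
For illustration only, here is a minimal sketch of what such a timed reaper
could look like; drc_reap_expired, dupreq_entry_put(), the timestamp field and
the 10-minute age are hypothetical, and removal from the DRC's hash partitions
(which needs t->mtx) is elided, so this is not the actual nfs_dupreq.c code:

#define DUPREQ_REAP_AGE 600 /* seconds; hypothetical tunable */

/* Hypothetical periodic reaper, run from a timed thread in addition to
 * the existing retire-on-new-request path. */
static void drc_reap_expired(drc_t *drc, time_t now)
{
        struct dupreq_entry *dv, *tdv;

        PTHREAD_MUTEX_lock(&drc->mtx);
        TAILQ_FOREACH_SAFE(dv, &drc->dupreq_q, fifo_q, tdv) {
                if (now - dv->timestamp < DUPREQ_REAP_AGE)
                        break; /* fifo_q is oldest-first */
                TAILQ_REMOVE(&drc->dupreq_q, dv, fifo_q);
                --(drc->size);
                /* rbtree removal (under the partition lock) elided here;
                 * dupreq_entry_put() drops the cache's ref, which in turn
                 * releases the dupreq's ref on its DRC */
                dupreq_entry_put(dv);
        }
        PTHREAD_MUTEX_unlock(&drc->mtx);
}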

Regards, Malahal.

On Thu, May 4, 2017 at 3:56 AM, Matt Benjamin  wrote:

> Hi Guys,
>
> To get on the record here, the current retire strategy using new requests
> to retire old ones is an intrinsic good, particularly with TCP and related
> cots-ord transports where requests are totally ordered.  I don't think
> moving to a strictly time-based strategy is preferable.  Apparently the
> actually observed or theorized issue has to do with not disposing of
> requests in invalidated DRCs?  That seems to be a special case, no?
>
> Matt
>
> - Original Message -
> > From: "Malahal Naineni" 
> > To: "Satya Prakash GS" 
> > Cc: "Matt Benjamin" , nfs-ganesha-devel@lists.
> sourceforge.net
> > Sent: Tuesday, May 2, 2017 2:21:48 AM
> > Subject: Re: [Nfs-ganesha-devel] drc refcnt
> >
> > Sorry, every cacheable request holds a ref on its DRC as well as its
> > DUPREQ. The ref on DUPREQ should be released when the request goes away
> > (via nfs_dupreq_rele). The ref on DRC will be released when the
> > corresponding DUPREQ gets released. Since we release DUPREQs while
> > processing other requests, you are right that the DRC won't be freed if
> > there are no more requests that would use the same DRC.
> >
> > I think we should be freeing dupreqs periodically using a timed function,
> > something like drc_free_expired.
> >
> > Regards, Malahal.
> >
> >
> >
> > On Tue, May 2, 2017 at 10:38 AM, Satya Prakash GS <g.satyaprak...@gmail.com>
> > wrote:
> >
> > > > On Tue, May 2, 2017 at 7:58 AM, Malahal Naineni 
> > > wrote:
> > > > A dupreq will place a refcount on its DRC when it calls xxx_get_drc, so
> > > > we will release that DRC refcount when we free the dupreq.
> > >
> > > Ok, so every dupreq holds a ref on the drc. In case of drc cache hit,
> > > a dupreq entry can ref the
> > > drc more than once. This is still fine because unless the dupreq entry
> > > ref goes to zero the drc isn't freed.
> > >
> > > > nfs_dupreq_finish() shouldn't free its own dupreq. When it does free some
> > > > other dupreq, we will release DRC refcount corresponding to that dupreq.
> > >
> > > > When we free all dupreqs that belong to a DRC
> > >
> > > In the case of a disconnected client, when are all the dupreqs freed?
> > >
> > > When all the filesystem operations subside from a client (mount point
> > > is no longer in use),
> > > nfs_dupreq_finish doesn't get called anymore. This is the only place
> > > where dupreq entries are removed from
> > > the drc. If the entries aren't removed from drc, drc refcnt doesn't go to 0.
> > >
> > > >, its refcount should go to
> > > > zero (maybe another ref is held by the socket itself, so the socket has
> > > > to be closed as well).
> > > >
> > > >
> > > > In fact, if we release DRC refcount without freeing the dupreq, that
> > > > would be a bug!
> > > >
> > > > Regards, Malahal.
> > > >
> > > Thanks,
> > > Satya.
> > >
> >
>
> --
> Matt Benjamin
> Red Hat, Inc.
> 315 West Huron Street, Suite 140A
> Ann Arbor, Michigan 48103
>
> http://www.redhat.com/en/technologies/storage
>
> tel.  734-821-5101
> fax.  734-769-8938
> cel.  734-216-5309
>


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Use saved_export when cleaning up saved_object (SavedFH).

2017-05-03 Thread GerritHub
From Frank Filz :

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/359513 )


Change subject: Use saved_export when cleaning up saved_object (SavedFH).
..

Use saved_export when cleaning up saved_object (SavedFH).

Also simplify compound cleanup to not do double cleanup work.

Change-Id: Ieab3edf6c82a17487cf2b9d9ea586cce3a28d870
Signed-off-by: Frank S. Filz 
---
M src/Protocols/NFS/nfs4_Compound.c
M src/include/nfs_proto_data.h
2 files changed, 27 insertions(+), 20 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/13/359513/1
-- 
To view, visit https://review.gerrithub.io/359513
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ieab3edf6c82a17487cf2b9d9ea586cce3a28d870
Gerrit-Change-Number: 359513
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


Re: [Nfs-ganesha-devel] drc refcnt

2017-05-03 Thread Matt Benjamin
Hi Guys,

To get on the record here, the current retire strategy using new requests to 
retire old ones is an intrinsic good, particularly with TCP and related 
cots-ord transports where requests are totally ordered.  I don't think moving 
to a strictly time-based strategy is preferable.  Apparently the actually 
observed or theorized issue has to do with not disposing of requests in 
invalidated DRCs?  That seems to be a special case, no?
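
To make the idea concrete, here is a rough sketch of retire-on-new-request; it
is not the actual nfs_dupreq_finish() code, dupreq_entry_put() is an assumed
helper, rbtree removal under the partition lock is elided, and the real
high-water/retire-window policy differs:

static void drc_retire_on_insert(drc_t *drc)
{
        struct dupreq_entry *dv;

        PTHREAD_MUTEX_lock(&drc->mtx);
        while (drc->size > drc->maxsize) {
                dv = TAILQ_FIRST(&drc->dupreq_q); /* oldest request */
                if (dv == NULL || dv->state != DUPREQ_COMPLETE)
                        break; /* nothing safely retirable yet */
                TAILQ_REMOVE(&drc->dupreq_q, dv, fifo_q);
                --(drc->size);
                dupreq_entry_put(dv); /* also drops the dupreq's ref on the DRC */
        }
        PTHREAD_MUTEX_unlock(&drc->mtx);
}

On a totally ordered (TCP/cots-ord) connection the head of fifo_q is the
oldest request, so retiring from the head as new requests arrive keeps the
cache bounded without a separate timer.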

Matt

- Original Message -
> From: "Malahal Naineni" 
> To: "Satya Prakash GS" 
> Cc: "Matt Benjamin" , 
> nfs-ganesha-devel@lists.sourceforge.net
> Sent: Tuesday, May 2, 2017 2:21:48 AM
> Subject: Re: [Nfs-ganesha-devel] drc refcnt
> 
> Sorry, every cacheable request holds a ref on its DRC as well as its
> DUPREQ. The ref on DUPREQ should be released when the request goes away
> (via nfs_dupreq_rele). The ref on DRC will be released when the
> corresponding DUPREQ gets released. Since we release DUPREQs while
> processing other requests, you are right that the DRC won't be freed if
> there are no more requests that would use the same DRC.
> 
> I think we should be freeing dupreqs periodically using a timed function,
> something like drc_free_expired.
> 
> Regards, Malahal.
> 
> 
> 
> On Tue, May 2, 2017 at 10:38 AM, Satya Prakash GS 
> wrote:
> 
> > > On Tue, May 2, 2017 at 7:58 AM, Malahal Naineni 
> > wrote:
> > > A dupreq will place a refcount on its DRC when it calls xxx_get_drc, so
> > > we will release that DRC refcount when we free the dupreq.
> >
> > Ok, so every dupreq holds a ref on the drc. In case of drc cache hit,
> > a dupreq entry can ref the
> > drc more than once. This is still fine because unless the dupreq entry
> > ref goes to zero the drc isn't freed.
> >
> > > nfs_dupreq_finish() shouldn't free its own dupreq. When it does free some
> > > other dupreq, we will release DRC refcount corresponding to that dupreq.
> >
> > > When we free all dupreqs that belong to a DRC
> >
> > In the case of a disconnected client, when are all the dupreqs freed?
> >
> > When all the filesystem operations subside from a client (mount point
> > is no longer in use),
> > nfs_dupreq_finish doesn't get called anymore. This is the only place
> > where dupreq entries are removed from
> > the drc. If the entries aren't removed from drc, drc refcnt doesn't go to
> > 0.
> >
> > >, its refcount should go to
> > > zero (maybe another ref is held by the socket itself, so the socket has
> > > to be closed as well).
> > >
> > >
> > > In fact, if we release DRC refcount without freeing the dupreq, that
> > > would be a bug!
> > >
> > > Regards, Malahal.
> > >
> > Thanks,
> > Satya.
> >
> 

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309



[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Fix infinite loop in avl_dirent_set_deleted

2017-05-03 Thread GerritHub
From Frank Filz :

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/359509 )


Change subject: Fix infinite loop in avl_dirent_set_deleted
..

Fix infinite loop in avl_dirent_set_deleted

Instead of constantly getting the next dirent after the one
being marked as deleted, get the actual next dirent...
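
A generic illustration of the loop-advance fix described above (this is not
the actual mdcache_avl.c code; mark_deleted() stands in for the real
per-dirent work, and only the iteration pattern is the point):

static void mark_run_deleted(struct avltree_node *node,
                             void (*mark_deleted)(struct avltree_node *))
{
        while (node != NULL) {
                /* capture the successor before doing the work; repeatedly
                 * asking for the successor of the same just-deleted node is
                 * what made the old loop spin forever */
                struct avltree_node *next = avltree_next(node);

                mark_deleted(node);
                node = next; /* advance to the actual next dirent */
        }
}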

Change-Id: I85cf471fe1d7f229d92445c15d68f1ffd1ac81aa
Signed-off-by: Frank S. Filz 
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_avl.c
1 file changed, 1 insertion(+), 1 deletion(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/09/359509/1
-- 
To view, visit https://review.gerrithub.io/359509
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I85cf471fe1d7f229d92445c15d68f1ffd1ac81aa
Gerrit-Change-Number: 359509
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 


Re: [Nfs-ganesha-devel] reg. drc nested locks

2017-05-03 Thread Matt Benjamin
I don't see a complete design proposal here.  Can you clarify the fuller 
picture of what you're proposing to do?

Matt

- Original Message -
> From: "Satya Prakash GS" 
> To: "Matt Benjamin" , "Malahal Naineni" 
> ,
> nfs-ganesha-devel@lists.sourceforge.net
> Sent: Wednesday, May 3, 2017 4:14:21 PM
> Subject: Re: [Nfs-ganesha-devel] reg. drc nested locks
> 
> Thank you for the quick reply.
> 
> In dupreq_finish, as part of retiring the drc quite a few locks are
> acquired and dropped (per entry). I want to fix a bug where drc retire
> will happen as part of a different function (this will be called from
> free_expired). The existing logic gets carried over to the new
> function and I was thinking that we may not have to acquire and
> release lock so many times.
> 
> Thanks,
> Satya.
> 
> On Thu, May 4, 2017 at 1:21 AM, Matt Benjamin  wrote:
> > Hi Satya,
> >
> > Sorry, my recommendation would be, we do not change locking to be more
> > coarse grained, and in general, should update it in response to an
> > indication that it is incorrect, not to improve readability in the first
> > instance.
> >
> > Regards,
> >
> > Matt
> >
> > - Original Message -
> >> From: "Matt Benjamin" 
> >> To: "Satya Prakash GS" 
> >> Cc: nfs-ganesha-devel@lists.sourceforge.net, "Malahal Naineni"
> >> 
> >> Sent: Wednesday, May 3, 2017 3:43:06 PM
> >> Subject: Re: [Nfs-ganesha-devel] reg. drc nested locks
> >>
> >> No?
> >>
> >> Matt
> >>
> >> - Original Message -
> >> > From: "Satya Prakash GS" 
> >> > To: nfs-ganesha-devel@lists.sourceforge.net, "Malahal Naineni"
> >> > 
> >> > Sent: Wednesday, May 3, 2017 3:34:31 PM
> >> > Subject: [Nfs-ganesha-devel] reg. drc nested locks
> >> >
> >> > Hi,
> >> >
> >> > In nfs_dupreq_start and nfs_dupreq_finish when allocating/freeing a
> >> > dupreq_entry we are trying hard to keep both dupreq_q and the rbtree
> >> > in sync acquiring both the partition lock and the drc (t->mtx,
> >> > drc->mtx). This requires dropping and reacquiring locks at certain
> >> > places. Can these nested locks be changed to take locks one after the
> >> > other.
> >> >
> >> > For example at the time of allocation, we could choose to do this -
> >> >
> >> > PTHREAD_MUTEX_lock(&t->mtx); /* partition lock */
> >> > nv = rbtree_x_cached_lookup(&drc->xt, t, &dk->rbt_k, dk->hk);
> >> > if (!nv) {
> >> > dk->refcnt = 2;
> >> > (void)rbtree_x_cached_insert(&drc->xt, t,
> >> > &dk->rbt_k, dk->hk);
> >> > PTHREAD_MUTEX_unlock(&t->mtx); /* partition lock */
> >> >
> >> > PTHREAD_MUTEX_lock(&drc->mtx);
> >> > TAILQ_INSERT_TAIL(&drc->dupreq_q, dk, fifo_q);
> >> > ++(drc->size);
> >> > PTHREAD_MUTEX_unlock(&drc->mtx);
> >> > }
> >> >
> >> > I am assuming this would simplify the lock code a lot.
> >> > If there is a case where this would introduce a race please let me know.
> >> >
> >> > Thanks,
> >> > Satya.
> >> >
> >> >
> >>
> >> --
> >> Matt Benjamin
> >> Red Hat, Inc.
> >> 315 West Huron Street, Suite 140A
> >> Ann Arbor, Michigan 48103
> >>
> >> http://www.redhat.com/en/technologies/storage
> >>
> >> tel.  734-821-5101
> >> fax.  734-769-8938
> >> cel.  734-216-5309
> >>
> >
> > --
> > Matt Benjamin
> > Red Hat, Inc.
> > 315 West Huron Street, Suite 140A
> > Ann Arbor, Michigan 48103
> >
> > http://www.redhat.com/en/technologies/storage
> >
> > tel.  734-821-5101
> > fax.  734-769-8938
> > cel.  734-216-5309
> 

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309



Re: [Nfs-ganesha-devel] reg. drc nested locks

2017-05-03 Thread Satya Prakash GS
Thank you for the quick reply.

In dupreq_finish, as part of retiring the drc, quite a few locks are
acquired and dropped (per entry). I want to fix a bug where drc retire
will happen as part of a different function (which will be called from
free_expired). The existing logic gets carried over to the new
function, and I was thinking that we may not have to acquire and
release the locks so many times.
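
To sketch the direction (illustrative only; retire_q and dupreq_entry_put()
are made-up names, and the per-entry rbtree removal under t->mtx is elided):
take drc->mtx once, move the entries to retire onto a private list, drop the
lock, and then do the per-entry cleanup outside it.

static void drc_retire_batch(drc_t *drc, int want)
{
        struct dupreq_entry *dv, *tdv;
        TAILQ_HEAD(, dupreq_entry) retire_q;

        TAILQ_INIT(&retire_q);

        PTHREAD_MUTEX_lock(&drc->mtx);
        TAILQ_FOREACH_SAFE(dv, &drc->dupreq_q, fifo_q, tdv) {
                if (want-- <= 0)
                        break;
                TAILQ_REMOVE(&drc->dupreq_q, dv, fifo_q);
                --(drc->size);
                TAILQ_INSERT_TAIL(&retire_q, dv, fifo_q);
        }
        PTHREAD_MUTEX_unlock(&drc->mtx);

        /* per-entry work (partition-lock rbtree removal, ref release) now
         * happens without holding drc->mtx */
        TAILQ_FOREACH_SAFE(dv, &retire_q, fifo_q, tdv) {
                TAILQ_REMOVE(&retire_q, dv, fifo_q);
                dupreq_entry_put(dv);
        }
}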

Thanks,
Satya.

On Thu, May 4, 2017 at 1:21 AM, Matt Benjamin  wrote:
> Hi Satya,
>
> Sorry, my recommendation would be, we do not change locking to be more coarse 
> grained, and in general, should update it in response to an indication that 
> it is incorrect, not to improve readability in the first instance.
>
> Regards,
>
> Matt
>
> - Original Message -
>> From: "Matt Benjamin" 
>> To: "Satya Prakash GS" 
>> Cc: nfs-ganesha-devel@lists.sourceforge.net, "Malahal Naineni" 
>> 
>> Sent: Wednesday, May 3, 2017 3:43:06 PM
>> Subject: Re: [Nfs-ganesha-devel] reg. drc nested locks
>>
>> No?
>>
>> Matt
>>
>> - Original Message -
>> > From: "Satya Prakash GS" 
>> > To: nfs-ganesha-devel@lists.sourceforge.net, "Malahal Naineni"
>> > 
>> > Sent: Wednesday, May 3, 2017 3:34:31 PM
>> > Subject: [Nfs-ganesha-devel] reg. drc nested locks
>> >
>> > Hi,
>> >
>> > In nfs_dupreq_start and nfs_dupreq_finish when allocating/freeing a
>> > dupreq_entry we are trying hard to keep both dupreq_q and the rbtree
>> > in sync acquiring both the partition lock and the drc (t->mtx,
>> > drc->mtx). This requires dropping and reacquiring locks at certain
>> > places. Can these nested locks be changed to take locks one after the
>> > other.
>> >
>> > For example at the time of allocation, we could choose to do this -
>> >
>> > PTHREAD_MUTEX_lock(&t->mtx); /* partition lock */
>> > nv = rbtree_x_cached_lookup(&drc->xt, t, &dk->rbt_k, dk->hk);
>> > if (!nv) {
>> > dk->refcnt = 2;
>> > (void)rbtree_x_cached_insert(&drc->xt, t,
>> > &dk->rbt_k, dk->hk);
>> > PTHREAD_MUTEX_unlock(&t->mtx); /* partition lock */
>> >
>> > PTHREAD_MUTEX_lock(&drc->mtx);
>> > TAILQ_INSERT_TAIL(&drc->dupreq_q, dk, fifo_q);
>> > ++(drc->size);
>> > PTHREAD_MUTEX_unlock(&drc->mtx);
>> > }
>> >
>> > I am assuming this would simplify the lock code a lot.
>> > If there is a case where this would introduce a race please let me know.
>> >
>> > Thanks,
>> > Satya.
>> >
>> >
>>
>> --
>> Matt Benjamin
>> Red Hat, Inc.
>> 315 West Huron Street, Suite 140A
>> Ann Arbor, Michigan 48103
>>
>> http://www.redhat.com/en/technologies/storage
>>
>> tel.  734-821-5101
>> fax.  734-769-8938
>> cel.  734-216-5309
>>
>
> --
> Matt Benjamin
> Red Hat, Inc.
> 315 West Huron Street, Suite 140A
> Ann Arbor, Michigan 48103
>
> http://www.redhat.com/en/technologies/storage
>
> tel.  734-821-5101
> fax.  734-769-8938
> cel.  734-216-5309



Re: [Nfs-ganesha-devel] reg. drc nested locks

2017-05-03 Thread Matt Benjamin
Hi Satya,

Sorry, my recommendation would be that we not change the locking to be more
coarse grained; in general, we should update it in response to an indication
that it is incorrect, not to improve readability in the first instance.

Regards,

Matt

- Original Message -
> From: "Matt Benjamin" 
> To: "Satya Prakash GS" 
> Cc: nfs-ganesha-devel@lists.sourceforge.net, "Malahal Naineni" 
> 
> Sent: Wednesday, May 3, 2017 3:43:06 PM
> Subject: Re: [Nfs-ganesha-devel] reg. drc nested locks
> 
> No?
> 
> Matt
> 
> - Original Message -
> > From: "Satya Prakash GS" 
> > To: nfs-ganesha-devel@lists.sourceforge.net, "Malahal Naineni"
> > 
> > Sent: Wednesday, May 3, 2017 3:34:31 PM
> > Subject: [Nfs-ganesha-devel] reg. drc nested locks
> > 
> > Hi,
> > 
> > In nfs_dupreq_start and nfs_dupreq_finish when allocating/freeing a
> > dupreq_entry we are trying hard to keep both dupreq_q and the rbtree
> > in sync acquiring both the partition lock and the drc (t->mtx,
> > drc->mtx). This requires dropping and reacquiring locks at certain
> > places. Can these nested locks be changed to take locks one after the
> > other.
> > 
> > For example at the time of allocation, we could choose to do this -
> > 
> > PTHREAD_MUTEX_lock(&t->mtx); /* partition lock */
> > nv = rbtree_x_cached_lookup(&drc->xt, t, &dk->rbt_k, dk->hk);
> > if (!nv) {
> > dk->refcnt = 2;
> > (void)rbtree_x_cached_insert(&drc->xt, t,
> > &dk->rbt_k, dk->hk);
> > PTHREAD_MUTEX_unlock(&t->mtx); /* partition lock */
> >
> > PTHREAD_MUTEX_lock(&drc->mtx);
> > TAILQ_INSERT_TAIL(&drc->dupreq_q, dk, fifo_q);
> > ++(drc->size);
> > PTHREAD_MUTEX_unlock(&drc->mtx);
> > }
> > 
> > I am assuming this would simplify the lock code a lot.
> > If there is a case where this would introduce a race please let me know.
> > 
> > Thanks,
> > Satya.
> > 
> > 
> 
> --
> Matt Benjamin
> Red Hat, Inc.
> 315 West Huron Street, Suite 140A
> Ann Arbor, Michigan 48103
> 
> http://www.redhat.com/en/technologies/storage
> 
> tel.  734-821-5101
> fax.  734-769-8938
> cel.  734-216-5309
> 

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309



Re: [Nfs-ganesha-devel] reg. drc nested locks

2017-05-03 Thread Matt Benjamin
No?

Matt

- Original Message -
> From: "Satya Prakash GS" 
> To: nfs-ganesha-devel@lists.sourceforge.net, "Malahal Naineni" 
> 
> Sent: Wednesday, May 3, 2017 3:34:31 PM
> Subject: [Nfs-ganesha-devel] reg. drc nested locks
> 
> Hi,
> 
> In nfs_dupreq_start and nfs_dupreq_finish when allocating/freeing a
> dupreq_entry we are trying hard to keep both dupreq_q and the rbtree
> in sync acquiring both the partition lock and the drc (t->mtx,
> drc->mtx). This requires dropping and reacquiring locks at certain
> places. Can these nested locks be changed to take locks one after the
> other.
> 
> For example at the time of allocation, we could choose to do this -
> 
> PTHREAD_MUTEX_lock(&t->mtx); /* partition lock */
> nv = rbtree_x_cached_lookup(&drc->xt, t, &dk->rbt_k, dk->hk);
> if (!nv) {
> dk->refcnt = 2;
> (void)rbtree_x_cached_insert(&drc->xt, t,
> &dk->rbt_k, dk->hk);
> PTHREAD_MUTEX_unlock(&t->mtx); /* partition lock */
>
> PTHREAD_MUTEX_lock(&drc->mtx);
> TAILQ_INSERT_TAIL(&drc->dupreq_q, dk, fifo_q);
> ++(drc->size);
> PTHREAD_MUTEX_unlock(&drc->mtx);
> }
> 
> I am assuming this would simplify the lock code a lot.
> If there is a case where this would introduce a race please let me know.
> 
> Thanks,
> Satya.
> 
> 

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309



[Nfs-ganesha-devel] reg. drc nested locks

2017-05-03 Thread Satya Prakash GS
Hi,

In nfs_dupreq_start and nfs_dupreq_finish, when allocating/freeing a
dupreq_entry, we are trying hard to keep both dupreq_q and the rbtree
in sync, acquiring both the partition lock and the drc lock (t->mtx,
drc->mtx). This requires dropping and reacquiring locks at certain
places. Can these nested locks be changed to take the locks one after
the other?

For example at the time of allocation, we could choose to do this -

PTHREAD_MUTEX_lock(&t->mtx); /* partition lock */
nv = rbtree_x_cached_lookup(&drc->xt, t, &dk->rbt_k, dk->hk);
if (!nv) {
        dk->refcnt = 2;
        (void)rbtree_x_cached_insert(&drc->xt, t,
                                     &dk->rbt_k, dk->hk);
        PTHREAD_MUTEX_unlock(&t->mtx); /* partition lock */

        PTHREAD_MUTEX_lock(&drc->mtx);
        TAILQ_INSERT_TAIL(&drc->dupreq_q, dk, fifo_q);
        ++(drc->size);
        PTHREAD_MUTEX_unlock(&drc->mtx);
}

I am assuming this would simplify the lock code a lot.
If there is a case where this would introduce a race, please let me know.

Thanks,
Satya.



Re: [Nfs-ganesha-devel] I've got a new version of patches to help with unexport races

2017-05-03 Thread Frank Filz
If anyone was looking for this, I finally got it pushed...

Along the way I discovered that mdcache_new_entry is double deconstructing
the entry on failure...

> -Original Message-
> From: Frank Filz [mailto:ffilz...@mindspring.com]
> Sent: Tuesday, May 2, 2017 5:07 PM
> To: nfs-ganesha-devel@lists.sourceforge.net
> Subject: [Nfs-ganesha-devel] I've got a new version of patches to help with
> unexport races
> 
> While working on this, I discovered some things that need to be done...
> 
> NFS v4 rename and link do not check for xdev...
> 
> I'm not sure that when the NFS v4 SavedFH is cleaned up (either being
> replaced, or released at end of compound) that it is cleaned up with the
> correct op_ctx, will check that and fix if not.
> 
> 9P doesn't check for xdev in rename or link.
> 
> I need to check that 9P sets a proper op_ctx when releasing object
> references (aka LRU references when MDCACHE is involved...).
> 
> Rename isn't prevented from overwriting a junction. This would require
> fsal_rename to do a lookup on the target name (at which point we might as
> well pass the target object if any down to the FSAL for mdcache
> consumption...). If that is protected from overwriting a junction, then there
> is no risk of having the wrong op_ctx for cleanup.
> 
> Hmm, I also need to make sure that NFS v3 and 9P junction crossing is done
> in a way that is unexport safe...
> 
> This stuff probably didn't cause problems before MDCACHE because cache
> inode entry cleanup wasn't actually dependent on the proper export in
> op_ctx...
> Either that or no one did enough testing...
> 
> Frank
> 
> 
> 
> 
>





[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Don't double deconstruct new entry in mdcache_new_entry

2017-05-03 Thread GerritHub
From Frank Filz :

Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/359354 )


Change subject: Don't double deconstruct new entry in mdcache_new_entry
..

Don't double deconstruct new entry in mdcache_new_entry

The combination of mdcache_put and mdcache_kill_entry will deconstruct
the new entry, so we actually were double deconstructing and causing
problems.

Also clarify the goto labels. Since the old "out" label, now
"out_release_new_entry", is only reached after nentry has been allocated,
there is no need to check for nentry being NULL (that check may have been a
holdover from before we aborted if memory allocation failed).
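
For illustration, the error-path shape this describes might look roughly like
the sketch below; alloc_entry() and do_setup() are assumed placeholder helpers
and this is not the actual mdcache_helpers.c code:

static fsal_status_t new_entry_sketch(void)
{
        mdcache_entry_t *nentry = alloc_entry();  /* assumed helper */
        fsal_status_t status = do_setup(nentry);  /* assumed helper */

        if (FSAL_IS_ERROR(status))
                goto out_release_new_entry;

        return status;

out_release_new_entry:
        /* mdcache_put() + mdcache_kill_entry() together deconstruct the new
         * entry, so no further cleanup of nentry happens after this point */
        mdcache_put(nentry);
        mdcache_kill_entry(nentry);
        return status;
}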

Change-Id: Idf1e829d8d5c7258e239824ca91df2e84810e2bb
Signed-off-by: Frank S. Filz 
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c
3 files changed, 20 insertions(+), 31 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/54/359354/1
-- 
To view, visit https://review.gerrithub.io/359354
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Idf1e829d8d5c7258e239824ca91df2e84810e2bb
Gerrit-Change-Number: 359354
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz 