[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: FSAL_PROXY : fix memory leak in pxy_readdir

2018-03-09 Thread GerritHub
From Patrice LUCAS:

Patrice LUCAS has uploaded this change for review. ( 
https://review.gerrithub.io/403298


Change subject: FSAL_PROXY : fix memory leak in pxy_readdir
..

FSAL_PROXY : fix memory leak in pxy_readdir

Change-Id: I1d6a2edb0e49521b4a265ebc4bd026e30d382be7
Signed-off-by: Patrice LUCAS 
---
M src/FSAL/FSAL_PROXY/handle.c
1 file changed, 3 insertions(+), 1 deletion(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/98/403298/1
-- 
To view, visit https://review.gerrithub.io/403298
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I1d6a2edb0e49521b4a265ebc4bd026e30d382be7
Gerrit-Change-Number: 403298
Gerrit-PatchSet: 1
Gerrit-Owner: Patrice LUCAS 
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel


[Nfs-ganesha-devel] review request https://review.gerrithub.io/#/c/390652/

2018-03-09 Thread Satya Prakash GS
Can somebody please review this change : https://review.gerrithub.io/#/c/390652/

It addresses this issue :

Leak in DRC when the client disconnects: nfs_dupreq_finish doesn't
always call put_drc; it does so only when certain criteria are met
(drc_should_retire). This can leak the drc and the dupreq entries
within it when the client disconnects. More information can be found
here: https://sourceforge.net/p/nfs-ganesha/mailman/message/35815930/



Main idea behind the change:

Introduced a new drc queue (tcp_drc_q in drc_st) which holds all the
active drc objects.
Every new drc is added to tcp_drc_q initially and is eventually moved
to tcp_drc_recycle_q; drcs are freed from tcp_drc_recycle_q. Every drc
is therefore either in the active queue or in the recycle queue.

DRC refcount and transition from the active queue to the recycle queue:

The drc refcnt is initialized to 2. dupreq_start increments the drc
refcount and dupreq_rele decrements it; the refcnt is also decremented
in nfs_rpc_free_user_data. When the drc refcnt drops to 0 and the drc
has not been in use for 10 minutes, pick it up and free its entries in
batches of 32 items at a time. Once the dupreq entry count reaches 0,
remove the drc from tcp_drc_q and add it to tcp_drc_recycle_q. Today,
entries added to tcp_drc_recycle_q are cleaned up periodically; the
same logic should clean up these entries too.

Thanks,
Satya.



[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: FSAL_CEPH: fix libcephfs result handling in read2/write2 ops

2018-03-09 Thread GerritHub
From Jeff Layton:

Jeff Layton has uploaded this change for review. ( 
https://review.gerrithub.io/403320


Change subject: FSAL_CEPH: fix libcephfs result handling in read2/write2 ops
..

FSAL_CEPH: fix libcephfs result handling in read2/write2 ops

The recent code changes to add vectored I/O to FSAL_CEPH don't handle
the result correctly. First, the code breaks out of the loop before fixing
up the lengths if the offset is 0. That means that reading from the
beginning of the file automatically returns a bunch of NULLs, and writes
to the beginning of the file fail.

Also, it seems to place some significance on the offset being -1. I'm
unclear on why we'd treat that as a special case: either it's already
-1ULL when we enter the function, or we just happened to hit that magic
offset after iterating over a particular iovec. Let's just remove that
check and use a zero-length but successful read or write as the
indicator to stop looping.

Finally, treat a successful but zero-length read as an EOF condition,
instead of requiring that the client issue another read to discover
that.
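The corrected loop logic might look roughly like this (a sketch only:
`read_cb` stands in for the libcephfs call, and all names are
illustrative rather than the actual FSAL_CEPH code):

```c
/* Sketch of the corrected result handling for a vectored read: advance
 * through the iovec by whatever the filesystem returned, and treat a
 * successful zero-length read as EOF -- no special-casing of offset 0
 * or -1. */
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>
#include <sys/types.h>
#include <sys/uio.h>

typedef ssize_t (*read_fn)(void *ctx, uint64_t off, void *buf, size_t len);

static int do_vectored_read(void *ctx, uint64_t offset,
                            struct iovec *iov, int iovcnt,
                            size_t *total, bool *eof, read_fn read_cb)
{
    *total = 0;
    *eof = false;
    for (int i = 0; i < iovcnt; i++) {
        ssize_t rc = read_cb(ctx, offset, iov[i].iov_base, iov[i].iov_len);
        if (rc < 0)
            return (int)rc;     /* real error from the filesystem */
        if (rc == 0) {
            *eof = true;        /* zero-length success: stop looping and
                                 * report EOF right away */
            break;
        }
        *total += (size_t)rc;   /* fix up the length unconditionally,
                                 * even when offset started at 0 */
        offset += (uint64_t)rc;
    }
    return 0;
}
```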

Change-Id: I628f0a694f1a0b26e9d76203a59e4493c0a8e25e
Reported-by: Patrick Donnelly 
Signed-off-by: Jeff Layton 
---
M src/FSAL/FSAL_CEPH/handle.c
1 file changed, 7 insertions(+), 8 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/20/403320/1
--
To view, visit https://review.gerrithub.io/403320
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I628f0a694f1a0b26e9d76203a59e4493c0a8e25e
Gerrit-Change-Number: 403320
Gerrit-PatchSet: 1
Gerrit-Owner: Jeff Layton 
--


Re: [Nfs-ganesha-devel] Multiprotocol support in ganesha

2018-03-09 Thread Frank Filz
Yea, a good point about Ganesha holding fds open. If we don't hold an fd 
open for getattr/setattr, NFS v4 should be able to function well without 
holding fds open.

 

The idea of using setlease would still be good for NFS v3 (or any NFS v4 
clients that use special stateids to do anonymous I/O). One thing we could 
do is have FSAL_VFS do a setlease on each global fd it opens. It could then 
use an avltree to map an fd back to its obj_handle. It actually does not 
need to notify MDCACHE, EXCEPT for the counting of open fds (which we need 
to make NOT something that MDCACHE manages the count of; it just needs to 
see the count so it can reap open fds).

 

On the other hand, if we separate the open fd LRU from the object cache LRU 
as I have proposed, in a multi-protocol environment the open fd LRU could be 
made much more aggressive, so open global fds get closed much more quickly 
after use.

 

Frank

 

From: pradeep.tho...@gmail.com [mailto:pradeep.tho...@gmail.com] On Behalf Of 
Pradeep
Sent: Thursday, March 8, 2018 7:37 PM
To: Frank Filz 
Cc: DENIEL Philippe ; nfs-ganesha-devel 

Subject: Re: [Nfs-ganesha-devel] Multiprotocol support in ganesha

 

Hi Frank,

 

I think there are cases where a touch/chmod on a file causes it to be opened 
forever in ganesha (until we are over the FD limit). This will prevent other 
applications such as Samba from opening the file and sharing it through the 
SMB protocol to Windows clients. Samba uses SETLEASE to get notified when 
other applications open the file. Ganesha may also have to do something 
similar to get notified and close cached FDs. The challenge is to map an FD 
to an MDCACHE object so that it can call close() on that object. One way is 
to map the FD to a path (using readlink) and then convert that to an export 
and relative path. Given the export and relative path, the FSAL could return 
a key which can be used to look up the object in MDCACHE. Is there a better 
way?

 

Here is the relevant Samba code that implements SETLEASE and signal handler: 
https://github.com/samba-team/samba/blob/master/source3/smbd/oplock_linux.c
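For reference, the core of that mechanism on Linux is fcntl() with
F_SETLEASE. A minimal sketch (simplified and with error handling
trimmed; the function names are illustrative, not Ganesha or Samba
code):

```c
/* Take a read lease on a cached fd and arrange for a signal (with
 * si_fd filled in) when another process opens the file in a
 * conflicting mode. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t lease_broken_fd = -1;

static void lease_break_handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    lease_broken_fd = info->si_fd;  /* the fd whose lease was broken;
                                     * the daemon would map it back to
                                     * its object and close it */
}

/* Open path read-only and take a read lease on it. */
int watch_cached_fd(const char *path)
{
    struct sigaction sa = { 0 };
    sa.sa_sigaction = lease_break_handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGIO, &sa, NULL);

    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    fcntl(fd, F_SETSIG, SIGIO);               /* so si_fd is filled in */
    if (fcntl(fd, F_SETLEASE, F_RDLCK) < 0) { /* take the read lease */
        close(fd);
        return -1;
    }
    /* On a lease break we must release the lease (within the kernel's
     * lease-break-time) and close the cached fd:
     *   fcntl(fd, F_SETLEASE, F_UNLCK); close(fd); */
    return fd;
}
```

Samba uses a real-time signal via F_SETSIG rather than plain SIGIO, but
the flow is the same.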

 

Thanks,

Pradeep

 

On Wed, Mar 7, 2018 at 7:18 AM, Frank Filz <ffilz...@mindspring.com> wrote:

Unfortunately out in the real world, people want to mix POSIX and Microsoft 
semantics…

 

So we do the best we can.

 

I wonder how much of the multi-protocol use falls into two camps:

 

1.  Simple file sharing, for example, I run Virtual Box on a Windows 
machine to get Linux VMs. I mount my Windows “My Documents” into the Linux VM 
so I can easily pass files back and forth. Share reservations work and prevent 
Linux from trampling things if Microsoft Word happens to have a file open.

2.  Some kind of database application with clients on multiple platforms. 
Such applications will use appropriate synchronizing operations, including 
byte range locks, in a way that does not depend on any of the peculiarities 
of POSIX or Microsoft semantics.

 

Yea, we can try to make things like delete of open files work as well as 
possible, but neither of those two use cases will have any really big 
surprises if the integration for a peculiarity like deleting an open file 
isn't perfect.

 

Now one use case that may not go so well is an application that uses lock files 
to indicate which client or process is active…

 

I’ve seen folks paranoid that POSIX byte range locks are only advisory, but 
outside of a program bug, if a POSIX app and a Windows app are both using byte 
range locks to protect records they are changing, it doesn’t matter that POSIX 
thinks they are only advisory… Of course a server CAN enforce the range locks 
(and NFS v4 even supports this idea), and yea, that will break a POSIX app that 
thought it didn’t actually need to respect the locks.

 

I think overall Ganesha does a pretty good job. There are places where it can 
do better.

 

While talk of having an SMB front end to Ganesha is fun, I doubt we will 
ever do that, and I don't think it's necessary in order to have sufficiently 
good integration to cover 99.9% of the possible multi-protocol use cases.

 

Frank

 

From: DENIEL Philippe [mailto:philippe.den...@cea.fr]
Sent: Wednesday, March 7, 2018 6:28 AM
To: Pradeep <pradeeptho...@gmail.com>;
nfs-ganesha-devel <nfs-ganesha-devel@lists.sourceforge.net>
Subject: Re: [Nfs-ganesha-devel] Multiprotocol support in ganesha

 

Hi,

from a "stratospheric" point of view, I see a potentially big issue ahead 
for such a feature: FSAL has been designed to be quite close to POSIX 
behavior, while CIFS follows the Microsoft file system semantics, which are 
pretty different from POSIX.
My experience with 9p integration in Ganesha shows some issues in POSIX 
corner cases (like "delete on close" situations); I can't imagine what 
integrating CIFS support would mean.
Years ago, Tom Tapley came to bake-a-thon (this was a few months after he 
joined Microsoft Research) and he talked about issues met by Mi

Re: [Nfs-ganesha-devel] review request https://review.gerrithub.io/#/c/390652/

2018-03-09 Thread Frank Filz
Matt had called for additional discussion on this, so let's get that discussion 
going.

Could you address Matt's questions?

Frank

> -Original Message-
> From: Satya Prakash GS [mailto:g.satyaprak...@gmail.com]
> Sent: Friday, March 9, 2018 4:17 AM
> To: nfs-ganesha-devel@lists.sourceforge.net
> Cc: Malahal Naineni ; Frank Filz
> 
> Subject: review request https://review.gerrithub.io/#/c/390652/
> 
> Can somebody please review this change :
> https://review.gerrithub.io/#/c/390652/




Re: [Nfs-ganesha-devel] review request https://review.gerrithub.io/#/c/390652/

2018-03-09 Thread Satya Prakash GS
I had replied to the comments on the same day Matt posted. My replies show
as drafts; it looks like I have to publish them, but I don't see a publish
button either. Can you guys help me out?

Thanks,
Satya.

On 9 Mar 2018 20:48, "Frank Filz"  wrote:

> Matt had called for additional discussion on this, so let's get that
> discussion going.
>
> Could you address Matt's questions?
>
> Frank


Re: [Nfs-ganesha-devel] review request https://review.gerrithub.io/#/c/390652/

2018-03-09 Thread Matt Benjamin
Hi Satya,

To publish, post a reply at the top level (it can even be blank); all your
inline comments will then be published.

Matt

On Fri, Mar 9, 2018 at 11:21 AM, Satya Prakash GS wrote:
> I had replied to the comments on the same day Matt posted. My replies show
> as drafts, looks like I have to publish them. I don't see a publish button
> either. Can you guys help me out.
>
> Thanks,
> Satya.
>
> On 9 Mar 2018 20:48, "Frank Filz"  wrote:
>>
>> Matt had called for additional discussion on this, so let's get that
>> discussion going.
>>
>> Could you address Matt's questions?
>>
>> Frank



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309



Re: [Nfs-ganesha-devel] review request https://review.gerrithub.io/#/c/390652/

2018-03-09 Thread Satya Prakash GS
Aah. Now I could publish the comments. Thank you Matt.

Regards,
Satya.

On Fri, Mar 9, 2018 at 9:53 PM, Matt Benjamin  wrote:
> Hi Satya,
>
> To reply, to a reply on the top level (can even be blank), all your
> inline comments will publish then.
>
> Matt
>
> On Fri, Mar 9, 2018 at 11:21 AM, Satya Prakash GS wrote:
>> I had replied to the comments on the same day Matt posted. My replies show
>> as drafts, looks like I have to publish them. I don't see a publish button
>> either. Can you guys help me out.
>>
>> Thanks,
>> Satya.
>>
>> On 9 Mar 2018 20:48, "Frank Filz"  wrote:
>>>
>>> Matt had called for additional discussion on this, so let's get that
>>> discussion going.
>>>
>>> Could you address Matt's questions?
>>>
>>> Frank



Re: [Nfs-ganesha-devel] submitting LizardFS FSAL to nfs-ganesha

2018-03-09 Thread Paweł Kalinowski
Hello Kaleb,

DanG's work is very important for us, as our current C++ code cannot be
directly included into your repo. We have a C wrapper in the pipeline, but
we have no actual experience in preparing a seamless integration with the
repository in question. How can we merge our efforts?

Best regards,

-- 
Pawel Kalinowski
IT Director
tel. +48 601 813 006

Skytechnology sp. z o.o.
Miłobędzka 35
02-634 Warszawa

On Wed, Mar 7, 2018 at 12:10 PM, Kaleb S. KEITHLEY wrote:

> On 03/07/2018 06:07 AM, Szymon Haly wrote:
> > Hi Kaleb,
> >
> > Hope you are well.
> > We are planning on releasing a new version on Monday 26/03/2018.
> > It will also include some fixes in our implementation of Ganesha. The
> > guys told me that it should be pretty easy to add to your project.
> >
> > Will let you know.
>
> Excellent.
>
> I actually thought you had done that once already. I thought DanG told
> me he was working on merging your FSAL.
>
> Regardless, it's good to have more contributions and contributors.
>
> Thanks and regards,
>
> --
>
> Kaleb
>
>
> >
> > On Thu, Feb 15, 2018 at 6:36 PM, Szymon Haly
> > <szymon.h...@skytechnology.pl> wrote:
> >
> > Hey Kaleb,
> > Hope you are well.
> >
> > Sorry for delay - we are working on the patch to make it perfect to
> > save us time during testing.
> > It should be ready some day next week - will keep you posted.
> >
> > On Mon, Feb 5, 2018 at 3:33 AM, Szymon Haly
> > <szymon.h...@skytechnology.pl> wrote:
> >
> > Hi Kaleb,
> >
> > Thanks so much for an awesome time. I forwarded your email to
> > the team and we will proceed.
> >
> > On Feb 5, 2018 10:50, "Kaleb S. KEITHLEY" wrote:
> >
> > Hi,
> >
> > Here's process for submitting your code via gerrithub.
> >
> > https://github.com/nfs-ganesha/nfs-ganesha/wiki/DevPolicy#Pushing_to_gerrit
> >
> > Good talking with you at FOSDEM.
> >
> > Regards,
> >
> > --
> >
> > Kaleb
> >
> >
> >
> >
> > --
> > Szymon Haly
> > CSO
> > P: +48221120519
> > M: +48 793 983 133
> >
> >
> >
> >
> > --
> > Szymon Haly
> > CSO
> > P: +48221120519
> > M: +48 793 983 133
> >
>
>


Re: [Nfs-ganesha-devel] About referrals in FSAL_VFS

2018-03-09 Thread Sriram Patil
Hi Frank,

Now that the referral fixes for pynfs are approved (not merged yet), I 
wanted to extend this to the subfsal layer and allow having referral points 
as desired. As we had discussed before, it would be a good idea to store fs 
locations in the attrlist, but a few days back on IRC you mentioned that we 
do not want to jump on that just yet. So, I was thinking of supporting 
referrals and fs_locations at the subfsal layer a little differently, in a 
way that will not involve a lot of changes.

Currently, the vfs_fsal_obj_handle stores fsroot and fslocations as part of 
the directory member of its union (hdl->u.directory). I was thinking of 
moving these out of the union and making them first-class variables in the 
structure (or in the union inside it). This way the fsroot and fslocations 
can be stored for any file type, for example a symlink. I will be adding 
some subfsal callbacks for populating and retrieving the fslocation 
details, as discussed before.

Let me know if this looks like a good thing to do or should we think about 
fslocations in attrlist?
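The layout change described above might look roughly like this (the field
and type names are simplified stand-ins, not the actual
vfs_fsal_obj_handle definition):

```c
/* Illustrative sketch: fs-location data moves out of the file-type
 * union so any object type (e.g. a symlink) can carry it. */
struct fs_location_info {
    const char *fsroot;        /* local root of the junction */
    const char *fslocations;   /* "server:/path" style referral string */
};

struct vfs_obj_handle_sketch {
    int obj_type;                        /* REGULAR_FILE, DIRECTORY, ... */
    struct fs_location_info *fs_loc;     /* now valid for any type; NULL
                                          * when the object is not a
                                          * referral point */
    union {
        struct { int dir_fd; } directory;      /* no longer owns fsroot */
        struct { const char *link_content; } symlink;
    } u;
};
```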

Thanks,
Sriram

From: Sriram Patil 
Date: Friday, February 9, 2018 at 10:30 AM
To: Frank Filz , 
"nfs-ganesha-devel@lists.sourceforge.net" 

Subject: Re: [Nfs-ganesha-devel] About referrals in FSAL_VFS

Okay, so there are a bunch of things that we need to do. I will just list them 
down here, let me know if I miss anything.

  1.  Fs locations can be moved to attrlist and FSAL can fill it as part of 
getattr if required
  2.  We need ref counting for fs locations for handling attr copy where 
multiple objects can hold reference to fs locations
  3.  Have a validity bit for fs locations. Allow a dbus command to invalidate 
all fs_locations to be able to refresh the locations for dynamic updates
  4.  Add methods/callbacks to subfsal mechanism to allow subfsals to choose 
how referrals work for them. This should also allow a way to follow the default 
VFS mechanism

Thanks,
Sriram

From: Frank Filz 
Date: Friday, February 9, 2018 at 12:05 AM
To: Sriram Patil , 
"nfs-ganesha-devel@lists.sourceforge.net" 

Subject: RE: [Nfs-ganesha-devel] About referrals in FSAL_VFS

There are places the struct attrlist gets copied, if so, either the referral 
needs to be duplicated in that copy, or it needs to be a single entity with a 
refcount.

One advantage of adding the referral to the struct attrlist is that MDCACHE 
will manage cache life of it.

It may or may not be worth having a separate cache validity bit for it. The 
advantage of doing that is that 9P and NFS v3 GETATTR won't need to fetch 
it. Also, conceivably we could have a dBus trigger to invalidate all 
referral attributes. (On the other hand, without a separate validity bit, 
that dBus command could just invalidate the attributes of any object that 
had a non-empty referral, though that would delay when an fs_locations 
attribute becomes visible if added to an object that previously didn't have 
one…) I think I just talked myself into a separate validity bit. We will 
want to be able to tell Ganesha to refresh fs_locations; it can march 
through all the cached inodes and invalidate the fs_locations attribute 
(whether it was empty or not), thus allowing dynamically moving a sub-tree.

Frank

From: Sriram Patil [mailto:srir...@vmware.com]
Sent: Thursday, February 8, 2018 9:33 AM
To: Frank Filz ; 
nfs-ganesha-devel@lists.sourceforge.net
Subject: Re: [Nfs-ganesha-devel] About referrals in FSAL_VFS

attrlist changes sound good. I am trying to figure out the necessity of a 
separate cache and ref counts. As far as I see, there will not be multiple 
references to fs_locations (which will be just a string in attrlist).

What do you mean by "xattr utility is being added in nfs-utils"? Is that 
the nfs-utils package? Is it just a package dependency or something else?

Thanks,
Sriram

From: Frank Filz <ffilz...@mindspring.com>
Date: Thursday, February 8, 2018 at 8:34 PM
To: Sriram Patil <srir...@vmware.com>,
"nfs-ganesha-devel@lists.sourceforge.net"
<nfs-ganesha-devel@lists.sourceforge.net>
Subject: RE: [Nfs-ganesha-devel] About referrals in FSAL_VFS

Ok, the additional sub_fsal ops look workable.

For the protocol layer stuff, we need a quick way to check if an object is a 
junction. One option is to resurrect the fsal_obj_handle JUNCTION type. Another 
option is to add fs_locations to struct attrlist and then an FSAL supporting 
referral objects would just set that attribute (reading the xattr for those 
implementations). That would mean the fs_locations attribute is always fetched, 
perhaps slowing down getattrs for all referral objects even if the caller isn’t 
going to trigger use of the referral, but one hopes there aren’t too many 
referrals (and most access, at least via NFS v4, will trigger the fs_locations 
anyway).

I think I actually like adding fs_locations to struct attrlist (note that it 
will add a 2nd attribute like ACL that probably should be refcounted etc.)

Re: [Nfs-ganesha-devel] About referrals in FSAL_VFS

2018-03-09 Thread Frank Filz
Hmm, did I sow confusion about fs_locations in the attrlist? I think that 
would be the best solution. We do need to be careful about tracking the 
validity of different parts of the attrlist, since we don't necessarily 
want to fetch the fs_locations from the filesystem every time we refresh 
attributes.

 

I think with fs_locations being in the attrlist, it becomes much easier to have 
a sub-fsal with different handling.


Frank

 

From: Sriram Patil [mailto:srir...@vmware.com] 
Sent: Friday, March 9, 2018 9:09 AM
To: Frank Filz ; 
nfs-ganesha-devel@lists.sourceforge.net
Subject: Re: [Nfs-ganesha-devel] About referrals in FSAL_VFS

 

Hi Frank,

 

Now that the referral fixes for pynfs are approved (not merged yet), I wanted 
to work on the subfsal layer to extend this to subfsal and allow having 
referral points as desired. As we had discussed before, it would be a good idea 
to store fs locations in the attrlist, but a few days back on IRC you mentioned 
that we do not want to jump onto that as yet. So, I was thinking of supporting 
referrals and fs_locations at subfsal layer a little differently which will not 
involve a lot of changes.

 

Currently, the vfs_fsal_obj_handle stores fsroot and fslocations as part of 
directory (hdl->u.directory). I was thinking of moving these out of the union 
and make them first class variable in the structure or the union inside it. 
This way the fsroot and fslocations can be stored for any file type, for 
example a symlink. I will be adding some subfsal callbacks for populating and 
retrieving the fslocation details as discussed before.

 

Let me know if this looks like a good thing to do or should we think about 
fslocations in attrlist?

 

Thanks,

Sriram

 

From: Sriram Patil mailto:srir...@vmware.com> >
Date: Friday, February 9, 2018 at 10:30 AM
To: Frank Filz , 
"nfs-ganesha-devel@lists.sourceforge.net 
 " 
mailto:nfs-ganesha-devel@lists.sourceforge.net> >
Subject: Re: [Nfs-ganesha-devel] About referrals in FSAL_VFS

 

Okay, so there are a bunch of things that we need to do. I will just list them 
here; let me know if I missed anything.

1.  fs_locations can be moved to attrlist, and the FSAL can fill it as part of 
getattr if required
2.  We need ref counting for fs_locations to handle attr copies, where 
multiple objects can hold a reference to the same fs_locations
3.  Have a validity bit for fs_locations. Allow a dbus command to 
invalidate all fs_locations, to be able to refresh the locations for dynamic 
updates
4.  Add methods/callbacks to the subfsal mechanism to allow subfsals to choose 
how referrals work for them. This should also allow a way to follow the default 
VFS mechanism

 

Thanks,

Sriram

 

From: Frank Filz <ffilz...@mindspring.com>
Date: Friday, February 9, 2018 at 12:05 AM
To: Sriram Patil <srir...@vmware.com>, 
"nfs-ganesha-devel@lists.sourceforge.net" 
<nfs-ganesha-devel@lists.sourceforge.net>
Subject: RE: [Nfs-ganesha-devel] About referrals in FSAL_VFS

 

There are places where the struct attrlist gets copied; if so, either the 
referral needs to be duplicated in that copy, or it needs to be a single 
entity with a refcount.

 

One advantage of adding the referral to the struct attrlist is that MDCACHE 
will manage its cache lifetime.

 

It may or may not be worth having a separate cache validity bit for it (the 
advantage of doing that is that 9P and V3 GETATTR won't need to fetch it). 
Also, conceivably we could have a D-Bus trigger to invalidate all referral 
attributes (on the other hand, without a separate validity bit, that D-Bus 
command could just invalidate the attributes of any object that had a non-empty 
referral, though that would delay when an fs_locations attribute becomes visible 
if added to an object that previously didn't have one…). I think I just talked 
myself into a separate validity bit. We will want to be able to tell Ganesha to 
refresh fs_locations; it can then march through all the cached inodes and 
invalidate the fs_locations attribute (whether it was empty or not), thus 
allowing dynamically moving a sub-tree.
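The separate validity bit described above could be sketched like this: fs_locations gets its own bit in the attribute valid mask, and the D-Bus refresh command clears just that bit on each cached inode, forcing a re-fetch on next access without touching the other attributes. Bit values and names here are made up for illustration:

```c
#include <assert.h>
#include <stdint.h>

#define ATTR_BASIC        0x0001u /* stand-in for the other attr bits */
#define ATTR_FS_LOCATIONS 0x0002u /* hypothetical new validity bit */

struct cached_attrs {
	uint32_t valid_mask;
	/* ... attribute payload ... */
};

/* What the D-Bus refresh would do per cached inode: clear only the
 * fs_locations bit, leaving other cached attributes valid. */
static void invalidate_fs_locations(struct cached_attrs *attrs)
{
	attrs->valid_mask &= ~ATTR_FS_LOCATIONS;
}

/* 9P / NFSv3 GETATTR paths can skip fetching when this is clear. */
static int fs_locations_valid(const struct cached_attrs *attrs)
{
	return (attrs->valid_mask & ATTR_FS_LOCATIONS) != 0;
}
```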

 

Frank

 

From: Sriram Patil [mailto:srir...@vmware.com]
Sent: Thursday, February 8, 2018 9:33 AM
To: Frank Filz <ffilz...@mindspring.com>; 
nfs-ganesha-devel@lists.sourceforge.net
Subject: Re: [Nfs-ganesha-devel] About referrals in FSAL_VFS

 

The attrlist changes sound good. I am trying to figure out the necessity of a 
separate cache and ref counts. As far as I can see, there will not be multiple 
references to fs_locations (which will just be a string in attrlist).

 

What do you mean by an xattr utility being added in nfs-utils? Is that the 
nfs-utils package? Is it just a package dependency, or something else?

 

Thanks,

Sriram

 

From: Frank Filz <ffilz...@mindspring.com>
Date: Thursday, February 8, 2018 at 8:34 PM
To:

[Nfs-ganesha-devel] Announce Push of V2.7-dev.3

2018-03-09 Thread Frank Filz
Branch next

 

Tag:V2.7-dev.3

 

NOTE: This merge includes an ntirpc pullup, please update your submodule

 

Release Highlights

 

* Pullup NTIRPC through #114

 

* Strip out legacy dirent cache and change the dirent AVL tree to by-name

 

* Add additional dbus command scripts

 

* MDCACHE - Initialize dirent structs in entry early

 

* Fixing the fslocations tests failures as seen in pynfs

 

* Move fsal_staticfsinfo_t into fsal_module.

 

* PROXY: add sample config file

 

* FSAL_CEPH: fix libcephfs result handling in read2/write2 ops

 

Signed-off-by: Frank S. Filz 

 

Contents:

 

0ae8ada Frank S. Filz V2.7-dev.3

82ca42e William Allen Simpson Pullup NTIRPC through #114

405748e Jeff Layton FSAL_CEPH: fix libcephfs result handling in read2/write2
ops

6829aaa Supriti Singh PROXY: add sample config file

c5bfc87 Supriti Singh Move fsal_staticfsinfo_t into fsal_module.

1be505e Sriram Patil Fixing the fslocations tests failures as seen in pynfs

4b461bd Daniel Gryniewicz MDCACHE - Initialize dirent structs in entry early

2843117 Daniel Gryniewicz Add tracepoints for NFS4 session refcounts

7e616fa Malahal Naineni Add purging gids cache to ganesha_mgr script

bb6aeb4 Malahal Naineni Add purging idmapper cache to ganesha_mgr script

b8f5ea8 Malahal Naineni Add auto expiration of idmap cache entry

cf61bb3 Frank S. Filz MDCACHE: Improve debug of mdcache dirent avl

180b032 Frank S. Filz MDCACHE: Change dirent AVL tree to by name rather than
by hash

566fb7e Frank S. Filz MDCACHE: Strip out legacy dirent cache

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel


[Nfs-ganesha-devel] Daylight savings starts this weekend

2018-03-09 Thread Frank Filz
For those outside the US, the conference call is going to start one hour 
earlier next week. Europe will catch up in a week or two, I think.

Unfortunately, since our kids' school schedule follows the time change, I 
cannot keep the conference call at a fixed UTC time. 

Frank 

Sent from my iPhone
