On 01/10/18 16:04, Frank Filz wrote:
Hi Frank, others,

Frank Filz wrote on Tue, Jan 09, 2018 at 03:19:46PM -0800:
All in all, these functions are not a great fit for the FSAL API so
I'm not sure it would be a good solution. Forcing some of the
functions into FSAL methods would require some code duplication that
loses some of the advantage of the mechanism. There would also be a
question of how things like the file descriptors and fsal_filesystems
are shared between the main FSAL_VFS and the underlying stacked FSAL.
Actually, we were thinking of making LUSTRE a stackable FSAL under VFS as
well, so when this was brought up on the phone I thought it was a good idea.
But now that I'm reading this, I have a more basic question -- is there still
any use for the XFS FSAL?
Same question for PANFS, I guess -- could we tell whether anyone is still using it?
The last commit specific to PANFS was a license change in 2015...
There have been many likely untested changes (multi-fd,
mdcache...) since...


Historically, XFS exposed handle functions through libhandle before the
kernel VFS did. Now it looks like both Linux and FreeBSD handle that
natively, so either way we only have one FSAL to build, with slightly
different internals that can be abstracted transparently, as we currently do.
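On Linux, the native variant is the name_to_handle_at(2)/open_by_handle_at(2) pair. A minimal sketch of the usual probe-then-allocate calling pattern (the helper name and error handling here are illustrative, not Ganesha's actual code):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <errno.h>
#include <assert.h>

/* Fetch a kernel file handle for a path. The first call is made with
 * handle_bytes = 0; the kernel fails it with EOVERFLOW and fills in the
 * size this filesystem actually needs, and the second call gets the
 * handle. Returns a malloc'd handle, or NULL on error (including
 * filesystems that don't support handles at all). */
static struct file_handle *get_fs_handle(const char *path, int *mount_id)
{
	struct file_handle *fh = malloc(sizeof(*fh));
	struct file_handle *big;

	if (fh == NULL)
		return NULL;
	fh->handle_bytes = 0;

	/* Size probe: expected to fail with EOVERFLOW. */
	if (name_to_handle_at(AT_FDCWD, path, fh, mount_id, 0) != -1 ||
	    errno != EOVERFLOW) {
		free(fh);
		return NULL;
	}

	/* Grow the buffer to the size the kernel reported. */
	big = realloc(fh, sizeof(*fh) + fh->handle_bytes);
	if (big == NULL) {
		free(fh);
		return NULL;
	}
	fh = big;

	if (name_to_handle_at(AT_FDCWD, path, fh, mount_id, 0) == -1) {
		free(fh);
		return NULL;
	}
	return fh;
}
```

FreeBSD gets the same effect with its own calls (getfh(2)/fhopen(2)), which is the "slightly different guts" a single FSAL can abstract over.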
If it's too much effort to maintain XFS and PANFS and they aren't
used or tested anymore, my vote would be to just rip them out.
If we don't need to support older kernels or seamless migration (no client 
unmount), XFS is unnecessary, and in fact carries the danger of being less 
tested. Though really, it uses almost the same mechanism we use for FreeBSD 
to change up the system calls, deal with fsid differences, and cope with 
differences in exactly how the handle is managed.

PANFS should be dead; Panasas chose to go with the FreeBSD kernel NFS server 
for their NFS implementation.

I'm not even sure we haven't broken something on FreeBSD since I last ran it. 
I know the folks at iXsystems were exploring Ganesha, but they have dropped 
out of active exploration.

Re: LUSTRE, I hadn't given it much practical thought, but I think stacking will
work for us. We'd be using pure VFS calls and adding only a few hooks, mostly
around open. The only trick will be getting the subfsal to hand us the opened
fd somehow so we can do additional checks on it, but if we enforce the stacking
order the way mdcache does (e.g. LUSTRE MUST sit above VFS), it should be possible.
For such intimate stacking, I see no issue with the stacked FSAL peeking into 
the underlying FSAL's structures (fsal_obj_handle and the file descriptor 
extension to state_t, for what you need). In this case, stacking is being used 
to create the equivalent of C++ inheritance (though the way MDCACHE stacks on 
top would not be so easy to do with C++ inheritance...).
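A toy sketch of that C-style "inheritance": the struct names and layouts below are illustrative stand-ins, not Ganesha's actual definitions, but they show how a stacked FSAL that knows it sits directly on VFS can downcast the sub-FSAL's handle to reach a private field such as the open fd.

```c
#include <assert.h>

struct fsal_obj_handle {		/* generic "base class" part */
	int type;
};

struct vfs_fsal_obj_handle {		/* sub-FSAL's private handle */
	struct fsal_obj_handle obj;	/* must be first member, so a
					 * pointer to it is also a pointer
					 * to the containing struct */
	int fd;				/* the open fd the stacked FSAL wants */
};

struct lustre_fsal_obj_handle {		/* stacked FSAL's handle */
	struct fsal_obj_handle obj;
	struct fsal_obj_handle *sub;	/* handle owned by the sub-FSAL */
};

/* Downcast into the sub-FSAL's private data. This is only safe because
 * the stacking order is enforced in code (LUSTRE MUST sit on VFS), so
 * "sub" is always really a vfs_fsal_obj_handle. */
static int lustre_peek_fd(struct lustre_fsal_obj_handle *h)
{
	struct vfs_fsal_obj_handle *vh =
		(struct vfs_fsal_obj_handle *)h->sub;
	return vh->fd;
}
```

The cast is exactly the part that breaks if users can compose arbitrary stacks in their config, which is why the stacking would have to be fixed in C code.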

There is one point though: we'd ideally want mdcache>lustre>vfs, so we'd
have to do the stacking manually from C code rather than letting folks set up
the stacking in their config.
Yeah, when we do things like this, we need to manage the stacking carefully.

Frank

Hi Frank and Dominique,

The stacking is actually working properly. For example, if we set a config like this,
===
EXPORT
{
[...]
  # Exporting FSAL
  FSAL {
    Name = NULL;
    FSAL {
      Name = VFS;
    }
  }
}
===
we get mdcache>null>vfs.

But for our initial need here at CEA (i.e. implementing a Lustre HSM restore that returns ERR_FSAL_DELAY when a released file is accessed through Ganesha's FSAL_VFS on an underlying Lustre file system), we are moving from a stackable FSAL to a VFS subfsal. By adding code inside FSAL_VFS, we can easily get the Lustre HSM status using the fd stored inside FSAL_VFS. (With a stackable FSAL on top of VFS, we see no easy way to get the path or fd of our file.)
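A sketch of what that subfsal hook might look like: in a real build this would call llapi_hsm_state_get_fd() from liblustreapi on the fd FSAL_VFS already holds and test HS_RELEASED in the returned state mask; here the flag and error constants are local placeholders (the real ones come from Lustre's and Ganesha's headers) so only the decision logic is shown.

```c
#include <stdint.h>
#include <assert.h>

/* Placeholder constants -- illustrative values only. In the real code,
 * HS_RELEASED comes from the Lustre user API headers and the
 * ERR_FSAL_* codes from Ganesha's fsal_errors. */
#define ERR_FSAL_NO_ERROR	0
#define ERR_FSAL_DELAY		35		/* illustrative value */
#define HSM_RELEASED_FLAG	(1u << 9)	/* stand-in for HS_RELEASED */

/* Map an HSM state mask to an FSAL status at open time. In the subfsal
 * the mask would be obtained with llapi_hsm_state_get_fd() on the fd
 * FSAL_VFS holds; a released file gets ERR_FSAL_DELAY so the client
 * retries after the HSM restore completes. */
static int hsm_open_check(uint32_t hsm_states)
{
	if (hsm_states & HSM_RELEASED_FLAG)
		return ERR_FSAL_DELAY;	/* file released: tell client to retry */
	return ERR_FSAL_NO_ERROR;	/* file online: proceed with open */
}
```

ERR_FSAL_DELAY maps to NFS4ERR_DELAY on the wire, which is what makes the client back off and retry while the restore runs.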



Regards,
Patrice

_______________________________________________
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel
