Oh, good, I think you will be well on your way. Whence can be a cookie 
(uint64_t) or a dirent name; FSAL_RGW uses whence as a name. Either way, you do 
need to provide a stable uint64_t cookie value for each entry (the NFS client 
will use this to resume readdir; if we have that dirent cached, then we can 
resume, otherwise Ganesha's mdcache issues readdir calls to seek to the 
cookie). If you can compute the uint64_t cookie from a directory handle and 
dirent name, and can determine ordering within the directory, your FSAL can 
also support the compute cookie API, which allows Ganesha to avoid invalidating 
its dirent cache on file creation.
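
For illustration, here is a minimal sketch of one way to derive a stable
uint64_t cookie from the dirent name alone, assuming a 64-bit FNV-1a hash is
stable enough for your purposes (collision handling is not addressed here):

#include <stdint.h>

/* Hypothetical helper: derive a stable readdir cookie from a dirent name.
 * Assumption: a 64-bit FNV-1a hash of the name; values below 3 are avoided
 * since low cookie values are commonly treated as reserved. */
static uint64_t s3_name_to_cookie(const char *name)
{
        uint64_t hash = 0xcbf29ce484222325ULL;  /* FNV offset basis */
        const char *p;

        for (p = name; *p != '\0'; p++) {
                hash ^= (uint64_t)(unsigned char)*p;
                hash *= 0x100000001b3ULL;       /* FNV prime */
        }

        if (hash < 3)
                hash += 3;      /* stay clear of reserved values */

        return hash;
}

Because the cookie depends only on the name, the same value can be recomputed
later from the directory handle and name, which is what the compute cookie API
requires.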

 

The lookup FSAL API method is called when the NFS client does a LOOKUP 
operation (including OPEN) and the dirent cache does not contain a valid dirent 
for that directory handle/name combination.
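
As a rough sketch of what a lookup might do against S3 (the s3_head_object()
and s3_prefix_exists() probes below are made-up placeholders for whatever
client calls your FSAL ends up using, not a real API), the usual pattern is to
probe both the plain object key and the directory-like prefix:

#include <stdbool.h>
#include <stdio.h>

enum s3_kind { S3_KIND_NONE, S3_KIND_FILE, S3_KIND_DIR };

/* Placeholder probes; a real FSAL would issue an S3 HEAD or ListObjects
 * request here.  These names are invented for the sketch. */
static bool s3_head_object(const char *key)      { (void)key; return false; }
static bool s3_prefix_exists(const char *prefix) { (void)prefix; return false; }

static enum s3_kind s3_lookup_kind(const char *parent_prefix, const char *name)
{
        char key[1024];

        /* First try "<parent>/<name>" as a plain object (a regular file). */
        snprintf(key, sizeof(key), "%s%s", parent_prefix, name);
        if (s3_head_object(key))
                return S3_KIND_FILE;

        /* Then try "<parent>/<name>/" as a directory-like prefix. */
        snprintf(key, sizeof(key), "%s%s/", parent_prefix, name);
        if (s3_prefix_exists(key))
                return S3_KIND_DIR;

        return S3_KIND_NONE;    /* would map to ERR_FSAL_NOENT */
}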

 

Frank

 

From: Aurelien RAINONE [mailto:aurelien.rain...@gmail.com] 
Sent: Wednesday, January 3, 2018 12:55 PM
To: Frank Filz <ffilz...@mindspring.com>; 
Nfs-ganesha-devel@lists.sourceforge.net
Subject: RE: [Nfs-ganesha-devel] Implement a FSAL for S3-compatible storage

 

 

On 3 Jan 2018 21:02, "Frank Filz" <ffilz...@mindspring.com> wrote:

> From: Aurelien RAINONE [mailto:aurelien.rain...@gmail.com]
> Sent: Wednesday, January 3, 2018 10:58 AM
> To: Nfs-ganesha-devel@lists.sourceforge.net
> Subject: [Nfs-ganesha-devel] Implement a FSAL for S3-compatible storage

>
> To follow up on the development on an FSAL for S3, I have some doubts and
> questions I'd like to share.
>
> S3 doesn't have the concept of a file descriptor; there's nothing other than 
> its full path that I can provide to S3 in order to get the attributes or 
> content of a specific object.
>
> I have some doubts regarding the implementation of the S3 fsal object
> handle (s3_fsal_obj_handle).
>
> Should s3_fsal_obj_handle be very simple, for example should it only contain
> a key that maps to the full S3 filename in a key-value store?
> Or on the contrary, should the handle implement a tree-like structure, like I
> saw in FSAL_MEM?
>
> Or something in between, and if so, what?
>
> Having a very simple handle has some advantages but may require more
> frequent network calls; for example, readdir won't have any kind of
> information about the content of the directory.
> Having a whole tree-like structure in the handle would allow direct
> access to directory content, but isn't that the role of Ganesha's cache?
>
> My questions probably show that I have trouble understanding the
> responsibility of my FSAL implementation with respect to the cache. Who does
> what, and who doesn't?

Things you will have to consider for readdir:

If you want to do a tree structure, can you enumerate the objects in an order 
such that you get all the objects that would be in a given directory in 
sequence without getting other objects? (If your enumeration is in sorted name 
order, that would work.) More importantly, can you restart enumeration of 
objects at an arbitrary position? (Ganesha V2.6 supports passing the last 
enumerated name to restart enumeration, if available.)
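
To illustrate that restart mechanism, here is a minimal sketch of building the 
query string for an S3 ListObjects (V1) request, assuming the bucket is carved 
into directories by '/' and using prefix, delimiter and marker to enumerate one 
directory and resume after the last key already returned (URL encoding and the 
actual HTTP call are left out):

#include <stdio.h>

/* Sketch: build the ListObjects (V1) query for enumerating one synthesized
 * directory.  "prefix" is the directory's key prefix ending in '/', "marker"
 * is the last key returned by the previous call, or NULL to start from the
 * beginning.  Values are assumed to be already URL-encoded. */
static int s3_build_list_query(char *buf, size_t len,
                               const char *prefix, const char *marker)
{
        if (marker != NULL)
                return snprintf(buf, len,
                                "?prefix=%s&delimiter=/&marker=%s&max-keys=1000",
                                prefix, marker);

        return snprintf(buf, len, "?prefix=%s&delimiter=/&max-keys=1000",
                        prefix);
}

With a layout like that, the marker passed to S3 can be reconstructed from the 
whence value Ganesha hands back to readdir.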

In order for Ganesha's cache to implement a tree structure, which you will need 
to construct if you don't want the S3 store to look like a single huge 
directory to NFS clients (which may be fine), your FSAL needs to be prepared to 
break the object names up into directory structures and then emulate the 
directories implied by the S3 object names, including producing a persistent 
128-byte or smaller handle (64-byte or smaller if you want to support NFS v3, 
and actually 5 bytes less because Ganesha adds 5 bytes to the handle produced 
by the FSAL). The handles need to be persistent across Ganesha restarts to 
support NFS semantics (and across nodes if you have a cluster with failover).
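
To make the size limits concrete, here is one possible (purely hypothetical) 
wire-handle layout, assuming the full S3 key is embedded in the handle whenever 
it fits within the NFS v3 budget; longer keys would need a persistent mapping 
from a fixed-size digest back to the key, which is not shown:

#include <stdint.h>
#include <string.h>

/* 64 bytes for NFS v3 minus the 5 bytes Ganesha adds to the FSAL's handle. */
#define S3_HANDLE_MAX 59

struct s3_wire_handle {
        uint8_t len;                    /* number of key bytes that follow */
        char key[S3_HANDLE_MAX - 1];    /* S3 key, not NUL-terminated */
};

/* Returns 0 on success, -1 if the key is too long to embed directly
 * (a real FSAL would fall back to a digest plus a persistent lookup table). */
static int s3_key_to_handle(const char *key, struct s3_wire_handle *wh)
{
        size_t klen = strlen(key);

        if (klen > sizeof(wh->key))
                return -1;

        wh->len = (uint8_t)klen;
        memcpy(wh->key, key, klen);
        return 0;
}

Embedding the key keeps the handle persistent across restarts and across nodes 
for free, at the cost of limiting how deep the pseudo-directory paths can be.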

One challenge if you synthesize a directory structure is generating an mtime or 
change attribute for the synthesized directories, especially if objects might 
be created by a process other than Ganesha. Without some mechanism for knowing 
that the dirent cache is invalid, both clients and Ganesha's own dirent cache 
will miss new, deleted, and renamed objects.
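
One partial mitigation (an assumption on my part, not an existing Ganesha 
mechanism) is to derive a synthetic change value from the listing of a 
directory's immediate children, so that objects created or deleted out of band 
alter the value the next time the directory is listed:

#include <stddef.h>
#include <stdint.h>

struct s3_child {
        uint64_t last_modified;         /* seconds from S3's LastModified */
};

/* Combine the newest child timestamp with the entry count so that both new
 * uploads and deletions move the change value. */
static uint64_t s3_dir_change(const struct s3_child *children, size_t count)
{
        uint64_t max_lm = 0;
        size_t i;

        for (i = 0; i < count; i++)
                if (children[i].last_modified > max_lm)
                        max_lm = children[i].last_modified;

        return (max_lm << 20) ^ (uint64_t)count;
}

This only helps when the directory is actually re-listed, so it narrows the 
window rather than closing it.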

Frank



 

Ok I see.

 

Well, a tree-like structure is a requirement for the FSAL. S3 supports 
pseudo-folders through the ListBucket API: it's actually possible to list the 
immediate children of any folder-like object by specifying a prefix and a 
delimiter. Also, S3 supports restarting enumeration at an arbitrary position, 
which is what they call a marker. That is the role of the "whence" fsal_cookie 
argument of readdir, right?

 

So basically, to keep Ganesha synced with the S3 folders, I should form my 
handle so that it points to a tree-like structure, and I would fill this tree 
during readdir calls by performing the S3 network requests. Should I update the 
mtime of a folder whenever its content has changed?

 

What is the role of the lookup FSAL call with respect to readdir?

 

Thanks a lot

 

 

 
