Re: [ceph-users] Ceph and NFS

2016-01-19 Thread Arthur Liu
On Tue, Jan 19, 2016 at 12:36 PM, Gregory Farnum  wrote:

> > I've found that using knfsd does not preserve cephfs directory and file
> > layouts, but using nfs-ganesha does. I'm currently using nfs-ganesha
> > 2.4dev5 and it seems stable so far.
>
> Can you expand on that? In what manner is it not preserving directory
> and file layouts?
> -Greg
>

My mistake - I wrongly assumed that directory layouts were not preserved.
What actually happens is that subdirectories created under a directory that
has a ceph.dir.layout xattr do not get a ceph.dir.layout xattr of their own.
Files created under such a subdirectory still inherit the file layout. The
same behavior applies to a plain kernel mount as well as via knfsd, so knfsd
does preserve directory and file layouts. On a separate note, there's no way
of knowing what the effective (inherited) ceph.dir.layout is for a
subdirectory unless you traverse up the tree.
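
For anyone who wants to check this from a kernel CephFS mount, a minimal
sketch (the mount point, directory names and pool name below are just
placeholders, not an actual setup):

# set an explicit layout on a directory
setfattr -n ceph.dir.layout.pool -v cephfs_data /mnt/cephfs/dir
# the directory itself now reports a layout
getfattr -n ceph.dir.layout /mnt/cephfs/dir
# a subdirectory created afterwards has no ceph.dir.layout xattr of its own
mkdir /mnt/cephfs/dir/subdir
getfattr -n ceph.dir.layout /mnt/cephfs/dir/subdir   # "No such attribute"
# but a file created in that subdirectory still inherits the file layout
touch /mnt/cephfs/dir/subdir/file
getfattr -n ceph.file.layout /mnt/cephfs/dir/subdir/file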


Re: [ceph-users] Ceph and NFS

2016-01-18 Thread Arthur Liu
On Mon, Jan 18, 2016 at 11:34 PM, Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de> wrote:

> Hi,
>
> On 18.01.2016 10:36, david wrote:
>
>> Hello All.
>> Does anyone provide Ceph rbd/rgw/cephfs through NFS? I have a
>> requirement for a Ceph cluster which needs to provide an NFS service.
>>
>
> We export a CephFS mount point on one of our NFS servers. It works out of
> the box with Ubuntu Trusty, a recent kernel and the kernel cephfs driver.
>
> ceph-fuse did not work that well, and using nfs-ganesha 2.2 instead of the
> standard kernel-based NFSd resulted in segfaults and permission problems.
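
(For context, a minimal sketch of that kind of knfsd setup; the monitor
address, paths and export options below are placeholders, not the actual
configuration:)

# kernel CephFS mount on the NFS server
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# /etc/exports - an explicit fsid is commonly needed when exporting a
# filesystem that has no backing block device
/mnt/cephfs  *(rw,no_subtree_check,fsid=100)
# re-read the export table
exportfs -ra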


I've found that using knfsd does not preserve cephfs directory and file
layouts, but using nfs-ganesha does. I'm currently using nfs-ganesha
2.4dev5 and it seems stable so far.
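
(For reference, a CephFS export through nfs-ganesha's FSAL_CEPH takes
roughly this shape; this is a sketch only - the export id, pseudo path,
squash setting and config path are placeholders and may differ by
packaging:)

# assumes ganesha can reach the cluster via /etc/ceph/ceph.conf and a keyring
cat > /etc/ganesha/ganesha.conf <<'EOF'
EXPORT {
    Export_Id = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL { Name = CEPH; }
}
EOF
# then restart the nfs-ganesha service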


[ceph-users] CephFS with cache tiering - reading files are filled with 0s

2015-09-02 Thread Arthur Liu
Hi,

I am experiencing an issue with CephFS with cache tiering where the kernel
clients are reading files filled entirely with 0s.

The setup:
ceph 0.94.3
create cephfs_metadata replicated pool
create cephfs_data replicated pool
cephfs was created on the above two pools, populated with files, then:
create cephfs_ssd_cache replicated pool,
then adding the tiers:
ceph osd tier add cephfs_data cephfs_ssd_cache
ceph osd tier cache-mode cephfs_ssd_cache writeback
ceph osd tier set-overlay cephfs_data cephfs_ssd_cache
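
(Spelled out as commands, the setup above is roughly the following sketch;
the pg counts and the filesystem name are placeholders:)

ceph osd pool create cephfs_metadata 128
ceph osd pool create cephfs_data 128
ceph fs new cephfs cephfs_metadata cephfs_data
# mount cephfs and populate it with files, then:
ceph osd pool create cephfs_ssd_cache 128
ceph osd tier add cephfs_data cephfs_ssd_cache
ceph osd tier cache-mode cephfs_ssd_cache writeback
ceph osd tier set-overlay cephfs_data cephfs_ssd_cache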

While the cephfs_ssd_cache pool is empty, multiple kernel clients on
different hosts open the same file (the file is small, <10k) at
approximately the same time. A number of the clients then see, at the OS
level, the entire file as empty. I can do a rados -p {cache pool} ls to
list the cached objects, and a rados -p {cache pool} get {object} /tmp/file
returns the complete contents of the file.
I can reproduce this by setting cache-mode to forward, running rados -p
{cache pool} cache-flush-evict-all, checking with rados -p {cache pool} ls
that no objects remain in the cache, setting cache-mode back to writeback
with the pool still empty, and then doing the multiple simultaneous opens
of the same file again.
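
(As a concrete sequence, with cephfs_ssd_cache as the cache pool and
<object> standing in for whatever rados ls reports, that is roughly:)

# inspect what the cache tier currently holds
rados -p cephfs_ssd_cache ls
rados -p cephfs_ssd_cache get <object> /tmp/file   # contents look complete here
# drain the cache tier and put it back into writeback mode
ceph osd tier cache-mode cephfs_ssd_cache forward
rados -p cephfs_ssd_cache cache-flush-evict-all
rados -p cephfs_ssd_cache ls                       # should print nothing
ceph osd tier cache-mode cephfs_ssd_cache writeback
# then open the same small file from several kernel clients at once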

Has anyone seen this issue? It looks like it may be a race condition
where the object has not yet been completely loaded into the cache pool,
so the cache pool serves out an incomplete object.
If anyone can shed some light on this or suggest anything to help debug
the issue, that would be very helpful.

Thanks,
Arthur


Re: [ceph-users] CephFS with cache tiering - reading files are filled with 0s

2015-09-02 Thread Arthur Liu
Hi John and Zheng,

Thanks for the quick replies!
I'm using kernel 4.2. I'll test out that fix.

Arthur

On Wed, Sep 2, 2015 at 10:29 PM, Yan, Zheng <uker...@gmail.com> wrote:

> probably caused by http://tracker.ceph.com/issues/12551
>
> On Wed, Sep 2, 2015 at 7:57 PM, Arthur Liu <arthurhs...@gmail.com> wrote:
> > Hi,
> >
> > I am experiencing an issue with CephFS with cache tiering where the
> kernel
> > clients are reading files filled entirely with 0s.
> >
> > The setup:
> > ceph 0.94.3
> > create cephfs_metadata replicated pool
> > create cephfs_data replicated pool
> > cephfs was created on the above two pools, populated with files, then:
> > create cephfs_ssd_cache replicated pool,
> > then adding the tiers:
> > ceph osd tier add cephfs_data cephfs_ssd_cache
> > ceph osd tier cache-mode cephfs_ssd_cache writeback
> > ceph osd tier set-overlay cephfs_data cephfs_ssd_cache
> >
> > While the cephfs_ssd_cache pool is empty, multiple kernel clients on
> > different hosts open the same file (the file is small, <10k) at
> > approximately the same time. A number of the clients then see, at the
> > OS level, the entire file as empty. I can do a rados -p {cache pool} ls
> > to list the cached objects, and a rados -p {cache pool} get {object}
> > /tmp/file returns the complete contents of the file.
> > I can reproduce this by setting cache-mode to forward, running rados -p
> > {cache pool} cache-flush-evict-all, checking with rados -p {cache pool}
> > ls that no objects remain in the cache, setting cache-mode back to
> > writeback with the pool still empty, and then doing the multiple
> > simultaneous opens of the same file again.
> >
> > Has anyone seen this issue? It looks like it may be a race condition
> > where the object has not yet been completely loaded into the cache
> > pool, so the cache pool serves out an incomplete object.
> > If anyone can shed some light on this or suggest anything to help debug
> > the issue, that would be very helpful.
> >
> > Thanks,
> > Arthur
> >