The working client is running in user space (probably ceph-fuse), while the
non-working client is using a kernel mount.
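For reference, the two mount paths look roughly like this (a minimal sketch; the monitor address, mount point and key/secret paths are placeholders):
`ceph-fuse -n client.admin /mnt/cephfs`    # user-space (FUSE) client
`mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret`    # kernel client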
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Steininger,
Herbert
Sent: April 25, 2017 16:44
To: ceph-users@lists.ceph.com
Subject: [ceph-users] cephfs not writeable on a
Hi Cephers,
We occasionally hit an assertion failure when trying to shut down an MDS, as
follows:
-14> 2017-01-22 14:13:46.833804 7fd210c58700 2 --
192.168.36.11:6801/2188363 >> 192.168.36.48:6800/42546 pipe(0x558ff3803400
sd=17 :52412 s=4 pgs=227 cs=1 l=1 c=0x558ff3758900).fault (0) Su
Hi John,
In our environment we want to deploy the MDS and the CephFS client on the same node
(users actually use CIFS/NFS to access the Ceph storage). However, if the node with
the active MDS fails, recovery takes a long time, a large part of which is the new
MDS waiting for all clients to reconnect.
T
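For reference, I assume the reconnect window that dominates this wait is controlled by the `mds_reconnect_timeout` option (45 seconds appears to be the usual default), e.g. in ceph.conf:
[mds]
    mds reconnect timeout = 45    # seconds the recovering MDS waits for clients to reconnect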
Thank you for the confirmation, John!
As we have both CIFS & NFS users, I was hoping the feature would be implemented
at the CephFS layer :<
Regards,
---Sandy
> -Original Message-
> From: John Spray [mailto:jsp...@redhat.com]
> Sent: Monday, July 11, 2016 7:28 PM
>
Hi Cephers,
I’m planning to set up samba/nfs on top of a CephFS kernel mount. The WORM (write
once, read many) feature is required, but I’m not sure whether CephFS officially
supports it; any suggestions? Thanks in advance.
Regards,
---Sandy
Hi Andrey,
You may switch your cluster back to an earlier CRUSH tunables profile (e.g.
hammer) with the command:
`ceph osd crush tunables hammer`
Or, if you only want to switch off tunables5, follow these steps (not sure if
there is a simpler way :<); a fuller sketch of the whole sequence follows after
the steps:
1. `ceph osd getcrushmap -o crushmap`
2. `
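For completeness, the whole sequence I have in mind looks roughly like this (a sketch; the `chooseleaf_stable` edit is my assumption about which tunable corresponds to tunables5):
1. `ceph osd getcrushmap -o crushmap`
2. `crushtool -d crushmap -o crushmap.txt`
3. edit crushmap.txt and set `tunable chooseleaf_stable 0`
4. `crushtool -c crushmap.txt -o crushmap.new`
5. `ceph osd setcrushmap -i crushmap.new`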
Regards,
---Sandy
> -Original Message-
> From: Yehuda Sadeh-Weinraub [mailto:yeh...@redhat.com]
> Sent: Thursday, May 12, 2016 5:18 AM
> To: Saverio Proto
> Cc: xusangdi 11976 (RD); ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] RadosGW - Problems running
Hi,
I'm not running a cluster like yours, but I don't think the issue is caused by
using the 2 APIs at the same time.
IIRC the dash is appended by S3 multipart upload, with a trailing digit
indicating the number of parts.
You may want to check this report in the s3cmd community:
https://sourcef
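For example (illustrative bucket/key names, and only a sketch), checking a multipart-uploaded object with the AWS CLI shows the pattern:
`aws s3api head-object --bucket mybucket --key bigfile`    # returns an ETag like "d41d8cd98f00b204e9800998ecf8427e-3", where "-3" means 3 parts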
> -Original Message-
> From: Ilya Dryomov [mailto:idryo...@gmail.com]
> Sent: Wednesday, March 23, 2016 1:04 AM
> To: xusangdi 11976 (RD)
> Cc: mbenja...@redhat.com; ceph-us...@ceph.com; ceph-de...@vger.kernel.org
> Subject: Re: [ceph-users] About the NFS on RGW
>
>
Hi Matt,
Thank you for the explanation and good luck on the NFS project!
Regards,
---Sandy
> -Original Message-
> From: Matt Benjamin [mailto:mbenja...@redhat.com]
> Sent: Tuesday, March 22, 2016 10:12 PM
> To: xusangdi 11976 (RD)
> Cc: ceph-us...@ceph.com; ceph-de...@
Hi Matt & Cephers,
I am looking for advice on setting up a file system based on Ceph. As CephFS is
not yet production ready (or have I missed some breakthroughs?), the new NFS on
RadosGW seems to be a promising alternative, especially for large files, which are
what we are most interested in. However, a
Hi Cephers,
Recently, while doing some tests of RGW functions, I found that the swift key of a
subuser is kept after the subuser is removed. As a result, this subuser/swift-key
pair can still pass the authentication system and obtain an auth token (without any
permissions, though). Moreover, if we create a s
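A rough reproduction sketch (uid/subuser names are placeholders; whether `--purge-keys` should be required here is exactly the question):
1. `radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --key-type=swift --gen-secret`
2. `radosgw-admin subuser rm --subuser=testuser:swift`
3. `radosgw-admin user info --uid=testuser`    # the swift key still shows up under "swift_keys"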