Re: [ceph-users] librmb: Mail storage on RADOS with Dovecot

2017-09-25 Thread Marc Roos
From the looks of it, too bad the efforts could not be combined/coordinated; that seems to be an issue with many open source initiatives. -----Original Message----- From: mj [mailto:li...@merit.unu.edu] Sent: Sunday, 24 September 2017 16:37 To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] ...

Re: [ceph-users] librmb: Mail storage on RADOS with Dovecot

2017-09-25 Thread Wido den Hollander
> On 22 September 2017 at 23:56, Gregory Farnum wrote: > > > On Fri, Sep 22, 2017 at 2:49 PM, Danny Al-Gaaf > wrote: > > On 22.09.2017 at 22:59, Gregory Farnum wrote: > > [..] > >> This is super cool! Is there anything written down that explains this > >> for Ceph developers who aren't familiar ...

Re: [ceph-users] librmb: Mail storage on RADOS with Dovecot

2017-09-25 Thread Danny Al-Gaaf
On 25.09.2017 at 09:00, Marc Roos wrote: > > From the looks of it, too bad the efforts could not be > combined/coordinated; that seems to be an issue with many open source > initiatives. That's not right. The plan is to contribute the librmb code to the Ceph project and the Dovecot part back to ...

[ceph-users] Updating ceph client - what will happen to services like NFS on clients

2017-09-25 Thread Götz Reinicke
Hi, I updated our Ceph OSD/MON nodes from 10.2.7 to 10.2.9 and everything looks good so far. Now I was wondering (as I may have forgotten how this works) what will happen to an NFS server which has its NFS shares on a Ceph RBD? Will the update interrupt any access to the NFS share, or is it ...

Re: [ceph-users] librmb: Mail storage on RADOS with Dovecot

2017-09-25 Thread Marc Roos
But from the looks of this Dovecot mailing list post, you didn't start your project by talking to the Dovecot guys, or have ongoing communication with them during the development. I would think that their experience could be a valuable asset. I am not talking about just giving some ...

Re: [ceph-users] erasure code profile

2017-09-25 Thread Vincent Godin
If you have at least 2 hosts per room, you can use k=3 and m=3 and place 2 shards per room (one on each host). You'll need 3 shards to read the data, so you can lose a room plus one host in the two other rooms and still get your data. It covers double faults, which is better. It will take more ...
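
A minimal sketch of how such a layout could be expressed; the profile and rule names are made up, and the rule id and CRUSH root have to match your own map (the rule is added by decompiling, editing and re-injecting the CRUSH map):

  $ ceph osd erasure-code-profile set ec-3-3 k=3 m=3 crush-failure-domain=host

  # pick 3 rooms, then 2 OSDs on different hosts within each room
  rule ec_by_room {
      id 3
      type erasure
      min_size 6
      max_size 6
      step set_chooseleaf_tries 5
      step take default
      step choose indep 3 type room
      step chooseleaf indep 2 type host
      step emit
  }

  $ ceph osd pool create mail 128 128 erasure ec-3-3 ec_by_room

With 2 shards per room, losing a whole room still leaves 4 of the 6 shards, one more than the k=3 needed to read.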

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-09-25 Thread TYLin
Hi, To my understanding, the BlueStore write workflow is: For a normal big write: 1. Write data to block. 2. Update metadata in RocksDB. 3. RocksDB writes to memory and block.wal. 4. Once a threshold is reached, flush entries in block.wal to block.db. For overwrites and small writes: 1. Write data and metadata to RocksDB ...
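
For anyone who wants to check this against a running OSD, a rough sketch (osd.0 is just an example id; the option shown is the Luminous name for HDD-backed OSDs):

  # see which devices block, block.db and block.wal actually point at
  $ ls -l /var/lib/ceph/osd/ceph-0/block*

  # writes smaller than this take the deferred (WAL) path
  $ ceph daemon osd.0 config get bluestore_prefer_deferred_size_hdd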

Re: [ceph-users] [RGW] SignatureDoesNotMatch using curl

2017-09-25 Thread Дмитрий Глушенок
You must use a triple "\n" with GET in the stringToSign. See http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html > On 18 Sep 2017, at 12:23, junho_k...@tmax.co.kr > wrote: > > I'm trying to use Ceph Object Storage from the CLI. > I used curl to make a request to the RGW with S3 ...
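
For reference, a minimal sketch of that signing scheme with curl and openssl; the keys, host and bucket are placeholders, and this is the old AWS v2 style signature described at the link above:

  #!/bin/sh
  access_key="AKIAEXAMPLE"       # placeholder
  secret_key="SECRETEXAMPLE"     # placeholder
  bucket="mybucket"              # placeholder
  host="rgw.example.com"         # placeholder
  date="$(date -R)"
  # verb, empty Content-MD5, empty Content-Type, Date, resource --
  # the two empty fields are why a GET needs three "\n" after the verb
  string_to_sign="GET\n\n\n${date}\n/${bucket}/"
  signature="$(printf '%b' "${string_to_sign}" \
      | openssl sha1 -hmac "${secret_key}" -binary | base64)"
  curl -s -H "Date: ${date}" \
       -H "Authorization: AWS ${access_key}:${signature}" \
       "http://${host}/${bucket}/"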

Re: [ceph-users] Ceph release cadence

2017-09-25 Thread Joao Eduardo Luis
I am happy with this branch of the thread! I'm guessing this would start post-Mimic though, if no one objects and if we want to target a March release? -Joao On 09/23/2017 02:58 AM, Sage Weil wrote: On Fri, 22 Sep 2017, Gregory Farnum wrote: On Fri, Sep 22, 2017 at 3:28 PM, Sage Weil wrote: ...

Re: [ceph-users] librmb: Mail storage on RADOS with Dovecot

2017-09-25 Thread Danny Al-Gaaf
On 25.09.2017 at 10:00, Marc Roos wrote: > > > But from the looks of this Dovecot mailing list post, you didn't start > your project by talking to the Dovecot guys, or have ongoing > communication with them during the development. I would think that > their experience could be a valuable ...

[ceph-users] CephFS Luminous | MDS frequent "replicating dir" message in log

2017-09-25 Thread David
Hi All, Since upgrading a cluster from Jewel to Luminous I'm seeing a lot of the following line in my ceph-mds log (path name changed by me; the messages refer to different dirs): 2017-09-25 12:47:23.073525 7f06df730700 0 mds.0.bal replicating dir [dir 0x1003e5b /path/to/dir/ [2,head] auth v= ...

Re: [ceph-users] Updating ceph client - what will happen to services like NFS on clients

2017-09-25 Thread David
Hi Götz, If you did a rolling upgrade, RBD clients shouldn't have experienced interrupted IO and therefore IO to NFS exports shouldn't have been affected. However, in the past when using kernel NFS over kernel RBD, I did have some lockups when OSDs went down in the cluster, so that's something to watch ...

Re: [ceph-users] RBD features(kernel client) with kernel version

2017-09-25 Thread Ilya Dryomov
On Sat, Sep 23, 2017 at 12:07 AM, Muminul Islam Russell wrote: > Hi Ilya, > > Hope you are doing great. > Sorry for bugging you. I did not find enough resources for my question, so I > would appreciate a reply. My questions are in red > colour. > > - layering: layering support ...
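
For anyone hitting the usual "image uses unsupported features" error with an older kernel, a sketch of the common workarounds (pool and image names are placeholders):

  # create an image with only the features an older krbd can handle
  $ rbd create mypool/img1 --size 10240 --image-feature layering,exclusive-lock

  # or strip the unsupported features from an existing image, then map it
  $ rbd feature disable mypool/img1 deep-flatten fast-diff object-map exclusive-lock
  $ rbd map mypool/img1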

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-09-25 Thread Mark Nelson
On 09/25/2017 03:31 AM, TYLin wrote: Hi, To my understanding, the BlueStore write workflow is: For a normal big write: 1. Write data to block. 2. Update metadata in RocksDB. 3. RocksDB writes to memory and block.wal. 4. Once a threshold is reached, flush entries in block.wal to block.db. For overwrites and small ...

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-09-25 Thread Dietmar Rieder
On 09/25/2017 02:59 PM, Mark Nelson wrote: > On 09/25/2017 03:31 AM, TYLin wrote: >> Hi, >> >> To my understanding, the BlueStore write workflow is: >> >> For a normal big write: >> 1. Write data to block. >> 2. Update metadata in RocksDB. >> 3. RocksDB writes to memory and block.wal. >> 4. Once a threshold ...

Re: [ceph-users] Updating ceph client - what will happen to services like NFS on clients

2017-09-25 Thread David Turner
It depends a bit on how you have the RBDs mapped. If you're mapping them using krbd, then they don't need to be updated to use the new rbd-fuse or rbd-nbd code. If you're using one of the latter, then you should schedule a time to restart the mounts so that they're mapped with the new Ceph version ...
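
Roughly what that looks like in practice (device and image names are placeholders):

  # krbd: I/O goes through the kernel module, so updating the ceph
  # userspace packages doesn't change what the mapped device runs
  $ rbd showmapped

  # rbd-nbd: the mapping is backed by librbd, so remap it (after stopping
  # the NFS export and unmounting on top of it) to pick up the new code
  $ rbd-nbd unmap /dev/nbd0
  $ rbd-nbd map mypool/img1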

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-09-25 Thread David Turner
DB/WAL partitions are per OSD. DB partitions need to be made as big as you need them. If they run out of space, they will fall back to the block device. If the DB and block are on the same device, then there's no reason to partition them and figure out the best size. If they are on separate devices ...
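
If you are provisioning with ceph-volume, a minimal sketch of the separate-device case (device and partition names are placeholders; the partition sizes are up to you):

  # data on an HDD, DB and WAL carved out of an NVMe
  $ ceph-volume lvm create --bluestore --data /dev/sdb \
        --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2

  # if DB and WAL would sit on the same fast device anyway, --block.db alone
  # is enough: the WAL is then kept on the DB device
  $ ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p3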

Re: [ceph-users] can't figure out why I have HEALTH_WARN in luminous

2017-09-25 Thread Michael Kuriger
Thanks!! I did see that warning, but it never occurred to me that I needed to disable it. Mike Kuriger, Sr. Unix Systems Engineer, T: 818-649-7235, M: 818-434-6195 On 9/23/17, 5:52 AM, "John Spray" wrote: On Fri, Sep 22, 2017 at 6:48 PM, Michael Kuriger wrote: > ...

Re: [ceph-users] CephFS Luminous | MDS frequent "replicating dir" message in log

2017-09-25 Thread Gregory Farnum
This is supposed to indicate that the directory is hot and is being replicated to another active MDS to spread the load. But skimming the code, it looks like maybe there's a bug and this is not gated on the multiple-active logic it's supposed to be. (Though I don't anticipate any issues for you.) ...
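
If the log noise itself is the concern, the knob behind this behaviour appears to be the balancer's replicate threshold; a sketch, assuming an MDS daemon named mds.a (the default is 8000, if I read the config reference right):

  # popularity threshold above which a hot dir gets replicated
  $ ceph daemon mds.a config get mds_bal_replicate_threshold

  # raise it at runtime if replication (and the log lines) kick in too eagerly
  $ ceph daemon mds.a config set mds_bal_replicate_threshold 16000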

[ceph-users] question regarding filestore on Luminous

2017-09-25 Thread Alan Johnson
I am trying to compare FileStore performance against BlueStore. With Luminous 12.2.0, BlueStore is working fine, but if I try to create a FileStore volume with a separate journal using Jewel-like syntax - "ceph-deploy osd create :sdb:nvme0n1" - device nvme0n1 is ignored and it sets up two partitions ...
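
One thing worth checking is whether filestore is being requested explicitly, since ceph-disk defaults to BlueStore on Luminous; a sketch, assuming a ceph-deploy release that already knows the --filestore flag (host and device names as in the original command):

  $ ceph-deploy osd create --filestore <host>:sdb:nvme0n1

  # or directly on the OSD node
  $ ceph-disk prepare --filestore /dev/sdb /dev/nvme0n1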

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-09-25 Thread Nigel Williams
On 26 September 2017 at 01:10, David Turner wrote: > If they are on separate > devices, then you need to make it as big as you need to ensure that it > won't spill over (or, if it does, that you're OK with the degraded performance > while the db partition is full). I haven't come across an equation ...

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-09-25 Thread Mark Nelson
On 09/25/2017 05:02 PM, Nigel Williams wrote: On 26 September 2017 at 01:10, David Turner wrote: If they are on separate devices, then you need to make it as big as you need to ensure that it won't spill over (or, if it does, that you're OK with the degraded performance while the db partition ...

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-09-25 Thread Nigel Williams
On 26 September 2017 at 08:11, Mark Nelson wrote: > The WAL should never grow larger than the size of the buffers you've > specified. It's the DB that can grow and is difficult to estimate both > because different workloads will cause different numbers of extents and > objects, but also because ...

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-09-25 Thread Sage Weil
On Tue, 26 Sep 2017, Nigel Williams wrote: > On 26 September 2017 at 08:11, Mark Nelson wrote: > > The WAL should never grow larger than the size of the buffers you've > > specified. It's the DB that can grow and is difficult to estimate both > > because different workloads will cause different numbers ...
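
In the meantime, one way to watch how much DB/WAL space an OSD actually consumes is the bluefs perf counters (osd.0 is just an example):

  $ ceph daemon osd.0 perf dump | grep -E '"(db|wal|slow)_(total|used)_bytes"'
  # db_used_bytes vs db_total_bytes   -> how full block.db is
  # wal_used_bytes vs wal_total_bytes -> how full block.wal is
  # slow_used_bytes > 0 means metadata has already spilled to the slow device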

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-09-25 Thread Dietmar Rieder
Thanks David, that confirms what I was assuming. Too bad that there is no estimate/method to calculate the db partition size. Dietmar On 09/25/2017 05:10 PM, David Turner wrote: > db/wal partitions are per OSD. DB partitions need to be made as big as > you need them. If they run out of space ...