[ceph-users] MDS consuming large memory and rebooting

2019-07-07 Thread Robert Ruge
Hi All. I came in this morning to find that one of my cephfs file systems was read-only and that the MDS was replaying the log, but the MDS processes kept crashing with out-of-memory errors. I have had to increase the memory on the VMs hosting the MDS, and the mds process now gets to ~76GB before it
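For reference, the knob usually involved in MDS memory growth is the cache target; a minimal sketch, assuming a Luminous-or-later cluster, with the 16 GiB value and the mds id placeholder being illustrative only, not taken from the thread:

    # cap the MDS cache memory target at ~16 GiB (example value)
    ceph config set mds mds_cache_memory_limit 17179869184
    # check what the daemon actually reports for its cache
    ceph daemon mds.<id> cache status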

Re: [ceph-users] ceph-volume failed after replacing disk

2019-07-07 Thread ST Wong (ITSC)
Thanks for all your help. I’m just curious whether I can re-use the same ID after the disk crash, since it seems I can do that according to the manual. It’s totally okay to use another ID ☺ Finally recreated the OSD without specifying an OSD ID – it takes ID 71 again. Thanks again. Best Rgds, /st Wong
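A hedged sketch of the ID-reuse path the manual describes; osd id 71 matches the thread, but /dev/sdx is a placeholder device:

    # mark the failed OSD destroyed so its ID stays reserved
    ceph osd destroy 71 --yes-i-really-mean-it
    # recreate on the replacement disk, reusing the same ID
    ceph-volume lvm create --osd-id 71 --data /dev/sdx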

[ceph-users] What's the best practice for Erasure Coding

2019-07-07 Thread David
Hi Ceph-Users, I'm working with a Ceph cluster (about 50TB, 28 OSDs, all Bluestore on LVM). Recently I have been trying to use the Erasure Code pool. My question is: "what's the best practice for using EC pools?". More specifically, which plugin (jerasure, isa, lrc, shec or clay) should I
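As a rough illustration of the moving parts involved: the profile name, k/m values and pg counts below are assumptions for the sketch, not recommendations from the list:

    # define an EC profile: 4 data + 2 coding chunks, jerasure plugin
    ceph osd erasure-code-profile set ec-4-2 k=4 m=2 plugin=jerasure crush-failure-domain=host
    # create a pool using that profile
    ceph osd pool create ecpool 128 128 erasure ec-4-2
    # needed if the pool will back RBD or CephFS data
    ceph osd pool set ecpool allow_ec_overwrites true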

Re: [ceph-users] Ubuntu 19.04

2019-07-07 Thread Kai Stian Olstad
On 06.07.2019 16:43, Ashley Merrick wrote: > Looking at the possibility of upgrading my personal storage cluster from > Ubuntu 18.04 -> 19.04 to benefit from a newer version of the kernel etc. For a newer kernel, install HWE[1]; at the moment you will get the 18.10 kernel, but in August it will
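For reference, enabling the standard (non-edge) HWE kernel stack on 18.04 is a one-line install:

    sudo apt install --install-recommends linux-generic-hwe-18.04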

Re: [ceph-users] Debian Buster builds

2019-07-07 Thread Martin Verges
Hello, you still need to use other mirrors, as Debian Buster still only provides 12.2.11 packages (https://packages.debian.org/buster/ceph). We at croit.io maintain (unofficial) Nautilus builds for Buster here: https://mirror.croit.io/debian-nautilus/ (signed with
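A sketch of what pointing apt at such a mirror looks like; the signing-key path is an assumption, so verify it against the mirror's own instructions before trusting the repository:

    # add the repository (Buster, Nautilus builds)
    echo 'deb https://mirror.croit.io/debian-nautilus/ buster main' \
        > /etc/apt/sources.list.d/croit-ceph.list
    # import the signing key (location assumed, check the mirror)
    curl -fsSL https://mirror.croit.io/keys/release.asc | apt-key add -
    apt update && apt install ceph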

Re: [ceph-users] Ceph performance IOPS

2019-07-07 Thread Christian Wuerdig
One thing to keep in mind is that the blockdb/wal becomes a Single Point Of Failure for all OSDs using it. So if that SSD dies, you essentially have to consider all OSDs using it as lost. I think most go with something like 4-8 OSDs per blockdb/wal drive, but it really depends how risk-averse you
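For context, this is the sort of layout being described; the device names and the 4-OSDs-per-NVMe split are illustrative assumptions:

    # one NVMe partition per OSD as blockdb/wal; if nvme0n1 dies,
    # all four OSDs sharing it are effectively lost
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1
    ceph-volume lvm create --data /dev/sdc --block.db /dev/nvme0n1p2
    ceph-volume lvm create --data /dev/sdd --block.db /dev/nvme0n1p3
    ceph-volume lvm create --data /dev/sde --block.db /dev/nvme0n1p4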

Re: [ceph-users] Debian Buster builds

2019-07-07 Thread Thore Krüss
Good evening, since Buster is now officially stable, what's the recommended way to get packages for Mimic and Nautilus? Best regards Thore On Tue, Jun 18, 2019 at 05:11:25PM +0200, Tobias Gall wrote: > Hello, > > I would like to switch to debian buster and test the upgrade from luminous > but

Re: [ceph-users] Ubuntu 19.04

2019-07-07 Thread John Hearns
You can compile from source :-) I can't comment on the compatibility of the packages between 18.04 and 19.04, sorry. On Sat, 6 Jul 2019 at 15:44, Ashley Merrick wrote: > Hello, > > Looking at the possibility of upgrading my personal storage cluster from > Ubuntu 18.04 -> 19.04 to benefit from a
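A rough sketch of a source build using the upstream repo; the release tag is only an example and this says nothing about 19.04 package compatibility:

    git clone https://github.com/ceph/ceph.git
    cd ceph
    git checkout v14.2.2        # example release tag
    ./install-deps.sh           # pulls build dependencies for the distro
    ./do_cmake.sh && cd build
    ninja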