[ceph-users] Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?

2022-02-09 Thread sascha a.
Hello, are all your pools running replica > 1? Also, having 4 monitors is pretty bad for split-brain situations. Zach Heise (SSCC) wrote on Wed., 9 Feb 2022, 22:02: > Hello, > > ceph health detail says my 5-node cluster is healthy, yet when I ran > ceph orch upgrade start --ceph-version 16.2.7 e
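The "unsafe to stop" message usually comes from the orchestrator's ok-to-stop check rather than from overall cluster health. A minimal sketch of commands to inspect pool replica sizes, the monitor count, and whether a given daemon can be stopped (osd.0 is a placeholder ID, not taken from the thread):

    ceph osd pool ls detail      # check the "size" (replica count) of each pool
    ceph mon stat                # an even monitor count risks losing quorum
    ceph osd ok-to-stop 0        # same check the upgrade performs before stopping an OSD
    ceph orch upgrade status     # progress and any blocking message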

[ceph-users] Re: osd crash when using rdma

2022-02-07 Thread sascha a.
Hey Marc, Some more information went to the "Ceph Performance very bad even in Memory?!" topic. Greetings On Mon, Feb 7, 2022 at 11:48 AM Marc wrote: > > I gave up on this topic.. ceph does not properly support it. Even though it seems really promising. > > Tested a ping on 4

[ceph-users] Re: osd crash when using rdma

2022-02-07 Thread sascha a.
rdma. > I'm also going to try rdma mode now, but haven't found any more info. > > sascha a. wrote on Tue, Feb 1, 2022 at 20:31: > >> Hey, >> >> I recently found this RDMA feature of Ceph, which I'm currently trying out. >> >> #rdma dev >> 0: m

[ceph-users] osd crash when using rdma

2022-02-01 Thread sascha a.
Hey, I recently found this RDMA feature of Ceph, which I'm currently trying out. #rdma dev 0: mlx4_0: node_type ca fw 2.42.5000 node_guid 0010:e000:0189:1984 sys_image_guid 0010:e000:0189:1987 rdma_server and rdma_ping work, as does "udaddy". Stopped one of my OSDs and added the following lines to
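The preview is cut off after "added the following lines to"; presumably ceph.conf is meant. A typical RDMA messenger configuration for the mlx4_0 device shown above would look roughly like this (an illustrative assumption, not the poster's exact lines):

    [global]
    ms_type = async+rdma
    ms_cluster_type = async+rdma
    # bind the messenger to the device and port reported by "rdma dev"
    ms_async_rdma_device_name = mlx4_0
    ms_async_rdma_port_num = 1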

[ceph-users] Re: Ceph Performance very bad even in Memory?!

2022-01-31 Thread sascha a.
> storage systems that can go faster than Ceph, and there may be open-source ones, but most of them don't provide the durability and consistency guarantees you'd expect under a lot of failure scenarios. -Greg > > On Sat, Jan 29, 2022 at 8:42 PM sascha a. wr

[ceph-users] Re: Ceph Performance very bad even in Memory?!

2022-01-31 Thread sascha a.
Hey, SDS is not just about performance. You want something reliable for the next > 10(?) years, the more data you have the more this is going to be an issue. > For me it is important that organisations like CERN and NASA are using it. > If you look at this incident with the 'bug of the year' then

[ceph-users] Re: Ceph Performance very bad even in Memory?!

2022-01-30 Thread sascha a.
Hey Vitalif, I had found your wiki as well as your own software before. Pretty impressive, and I love your work! I especially like your "Theoretical Maximum Random Access Performance" section. That is exactly what I would expect of Ceph's performance as well (which is by design very close to your vi

[ceph-users] Re: Ceph Performance very bad even in Memory?!

2022-01-30 Thread sascha a.
Hello Marc, > I think you misread this. If you look at the illustration it is quite clear, going from 3x100,000 IOPS to 500 with Ceph. That should be a 'warning'. In my case it's dropping from 5,000,000 to ~5,000 per server. In this case I could use SD cards for my Ceph cluster. The bottleneck is

[ceph-users] Re: Ceph Performance very bad even in Memory?!

2022-01-30 Thread sascha a.
Hello Marc, thanks for your response. I wrote this email early in the morning, after spending the whole night and the last two weeks benchmarking Ceph. The main reason I'm spending days on it is that I get poor performance with about 25 NVMe disks, and I have gone down a long, long road with hundreds of benchmar
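For reference, the single-queue latency numbers discussed in this thread are usually measured with rados bench or fio's rbd engine; a sketch, where the pool name testpool and image testimg are placeholders:

    # single-threaded 4K writes against a pool (latency-bound)
    rados bench -p testpool 30 write -b 4096 -t 1
    # 4K random writes against an RBD image via fio's rbd engine
    fio --ioengine=rbd --pool=testpool --rbdname=testimg \
        --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
        --runtime=30 --time_based --name=rbd_4k_randwrite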

[ceph-users] Ceph Performance very bad even in Memory?!

2022-01-29 Thread sascha a.
Hello, I'm currently in the process of setting up a production Ceph cluster on a 40 Gbit network (40 Gb for both the internal and the public network). Did a lot of machine/Linux tweaking already: - cpupower state disable - lowlatency kernel - kernel tweaks - rx buffer optimization - affinity mappings - correc
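A rough sketch of how host tuning of this kind is typically applied (eth0 and the ring-buffer sizes are placeholders; exact limits depend on the NIC and distribution):

    # pin CPUs to the performance governor and disable deep C-states
    cpupower frequency-set -g performance
    cpupower idle-set -D 0
    # enlarge NIC ring buffers (check hardware limits with: ethtool -g eth0)
    ethtool -G eth0 rx 4096 tx 4096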