[ceph-users] Re: Smarter DB disk replacement

2021-09-15 Thread Ján Senko
M.2 was not designed for hot swap, and Icydock's solution is a bit outside the specification. I really like the new Supermicro box (610P) that has 12 spinning disks and then 6 NVMes: 2 of them in 2.5" x 7mm format and 4 of them in the new E1.S format. E1.S is practically next-gen hot-plug M.2. Ján

[ceph-users] Re: Smarter DB disk replacement

2021-09-13 Thread Reed Dier
I've been eyeing a similar icydock product (https://www.icydock.com/goods.php?id=309) to make M.2 drives more serviceable. While M.2 isn't ideal, if you have a 2U/4U box with a ton of available slots in the back, you could use these with some Micron

[ceph-users] Re: Smarter DB disk replacement

2021-09-09 Thread Mark Nelson
I don't think the bigger tier-1 enterprise vendors have really jumped on this, but I've been curious to see if anyone would create a dense hot-swap M.2 setup (possibly combined with traditional 3.5" HDD bays). The only vendor I've really seen even attempt something like this is icydock:

[ceph-users] Re: Smarter DB disk replacement

2021-09-09 Thread David Orman
Exactly, we minimize the blast radius/data destruction by allocating more DB/WAL devices of smaller size rather than fewer of larger size. We encountered this same issue on an earlier iteration of our hardware design. With rotational drives and NVMes, we are now aiming for a 6:1 ratio based on our
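
[Editor's note: a ratio like this can be expressed declaratively with a cephadm OSD service spec. A minimal sketch follows, assuming the cephadm orchestrator is in use; the service_id, host pattern, and slots-per-NVMe value are illustrative assumptions, not the poster's actual configuration.]

service_type: osd
service_id: osd_hdd_nvme_db
placement:
  host_pattern: 'osd-host-*'   # hypothetical host naming
spec:
  data_devices:
    rotational: 1              # HDDs carry the OSD data
  db_devices:
    rotational: 0              # NVMe devices carry DB/WAL
  db_slots: 6                  # six OSD DBs per NVMe: losing one NVMe affects six OSDs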

[ceph-users] Re: Smarter DB disk replacement

2021-09-09 Thread Konstantin Shalygin
Ceph guarantees data consistency only when the data is written by Ceph. When an NVMe dies, we replace it and backfill. Backfilling an OSD host normally takes us about two weeks. k Sent from my iPhone > On 9 Sep 2021, at 17:10, Michal Strnad wrote: > > 2. When the DB disk is not completely dead and has only relocated

[ceph-users] Re: Smarter DB disk replacement

2021-09-09 Thread Janne Johansson
On Thu, 9 Sep 2021 at 16:09, Michal Strnad wrote: > When the disk with the DB dies, it will cause inaccessibility of all dependent OSDs (six or eight in our > environment). > How do you do it in your environment? Have two SSDs for 8 OSDs, so only half go away when one SSD dies. -- May the most
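
[Editor's note: Janne's layout (two shared SSDs per eight OSDs) can be sketched with the same kind of spec, this time capping the number of DB devices per host; again the field values are illustrative assumptions, not the poster's actual configuration.]

service_type: osd
service_id: osd_hdd_ssd_db
placement:
  host_pattern: 'osd-host-*'   # hypothetical host naming
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
    limit: 2                   # use at most two SSDs per host for DB/WAL
  db_slots: 4                  # four OSD DBs per SSD: losing one SSD takes out half the OSDs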