If you're using storcli/perccli to manage the LSI controller, you can disable 
the on-disk (drive) write cache with:
storcli /cx/vx set pdcache=off

You can also make sure write caching is turned off at the controller level 
with:
storcli /cx/vx set iopolicy=direct
storcli /cx/vx set wrcache=wt

You can also tweak the read-ahead setting for the VD if you want, though with 
SSDs I don't think it will be much of an issue:
storcli /cx/vx set rdcache=nora
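
After applying the settings, you can double-check what the controller actually 
reports for the VD (cx/vx as above; exact field names vary a bit between 
storcli versions, so treat the grep pattern as a starting point):
storcli /cx/vx show all | grep -iE 'cache|policy'
The cache flags should come back as write-through, no read-ahead, direct IO, 
and the disk cache policy as disabled.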

I'm sure the MegaCli equivalents can be found with a quick search.
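
From memory (please double-check the exact syntax against your MegaCli 
version), the rough equivalents would be something like:
MegaCli -LDSetProp -DisDskCache -LAll -aALL   # drive (pd) cache off
MegaCli -LDSetProp -WT -LAll -aALL            # write-through
MegaCli -LDSetProp -Direct -LAll -aALL        # direct IO
MegaCli -LDSetProp -NORA -LAll -aALL          # no read-ahead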

You may also want to check your C-states and P-states to make sure no 
aggressive power-saving features are getting in the way.
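
For example (assuming the cpupower utility is installed; tools and paths vary 
by distro and kernel):
cpupower frequency-info                 # current driver / governor
cpupower idle-info                      # which C-states are enabled
cpupower frequency-set -g performance   # pin the performance governor
Deeper C-states can also be capped via the kernel command line, e.g. 
intel_idle.max_cstate=1 processor.max_cstate=1, if you find they hurt latency.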

Reed

> On Aug 31, 2020, at 7:44 AM, VELARTIS Philipp Dürhammer 
> <p.duerham...@velartis.at> wrote:
> 
> We have older LSI RAID controllers with no HBA/JBOD option, so we expose the 
> single disks as RAID0 devices. Ceph should not be aware of the cache status?
> But digging deeper into it, it seems that 1 out of 4 servers is performing a 
> lot better and has super low commit/apply latencies, while the others have a 
> lot more (20+) on heavy writes. This only applies to the SSDs; for the HDDs I 
> can't see a difference...
> 
> -----Original Message-----
> From: Frank Schilder <fr...@dtu.dk> 
> Sent: Monday, 31 August 2020 13:19
> To: VELARTIS Philipp Dürhammer <p.duerham...@velartis.at>; 
> 'ceph-users@ceph.io' <ceph-users@ceph.io>
> Subject: Re: Can 16 server grade ssd's be slower than 60 hdds? (no extra 
> journals)
> 
> Yes, they can - if the volatile write cache is not disabled. There are many 
> threads on this, including recent ones. Search for "disable write cache" 
> and/or "disable volatile write cache".
> 
> You will also find different methods of doing this automatically.
> 
> Best regards,
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
> 
> ________________________________________
> From: VELARTIS Philipp Dürhammer <p.duerham...@velartis.at>
> Sent: 31 August 2020 13:02:45
> To: 'ceph-users@ceph.io'
> Subject: [ceph-users] Can 16 server grade ssd's be slower than 60 hdds? (no 
> extra journals)
> 
> I have a production cluster with 60 OSDs and no extra journals. It's 
> performing okay. Now I added an extra SSD pool with 16 Micron 5100 MAX 
> drives, and its performance is a little slower than or equal to the 60-HDD 
> pool, for 4K random as well as sequential reads. Everything is on a dedicated 
> 2x 10G network. The HDDs are still on FileStore, the SSDs on BlueStore. Ceph 
> Luminous.
> What should be possible with 16 SSDs vs. 60 HDDs and no extra journals?
> 
