How bizarre, I haven’t dealt with this specific SKU before.  Some Dell / LSI 
HBAs call this passthrough mode, some “personality”, some “jbod mode”; dunno 
why they can’t be consistent.


> We are testing an experimental Ceph cluster with the server and controller
> named in the subject.
> 
> The controller does not have an HBA mode, only a 'NonRAID' mode, some sort
> of 'auto RAID0' configuration.

Dell’s CLI guide describes setting individual drives in Non-RAID, which 
*smells* like passthrough, not the more-complex RAID0 workaround we had to do 
before passthrough.

https://www.dell.com/support/manuals/en-nz/perc-h750-sas/perc_cli_rg/set-drive-state-commands?guid=guid-d4750845-1f57-434c-b4a9-935876ee1a8e&lang=en-us
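
If it behaves like storcli (perccli is the same tool rebadged), something along 
these lines should show whether the drives come up as Non-RAID / passthrough.  
Binary name and exact syntax are my assumption, the linked guide is 
authoritative:

    # controller summary: model, firmware, drive states
    perccli64 /c0 show
    # per-drive detail (state, media type, link speed)
    perccli64 /c0/eall/sall show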
> 
> We are using SATA SSD disks (MICRON MTFDDAK480TDT) that perform very well,
> and SAS HDD disks (SEAGATE ST8000NM014A) that instead perform very badly
> (particularly, very low IOPS).

Spinners are slow, this is news?

That said, how slow is slow?  Testing commands and results or it didn’t happen.
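
The kind of thing I mean: a 4k sync-write fio run against one of the HDDs.  
Device name is a placeholder, and it will clobber whatever it’s pointed at, so 
use a scratch disk or a file:

    # 4k random sync writes, the worst case for a spinner behind Ceph
    fio --name=hdd-4k-syncwrite --filename=/dev/sdX --ioengine=libaio \
        --direct=1 --sync=1 --rw=randwrite --bs=4k --iodepth=1 \
        --numjobs=1 --runtime=60 --time_based
    # and the Ceph-level view once OSDs are up
    ceph tell osd.0 bench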

Also, firmware matters.  Run Dell’s DSU (Dell System Update) and get the 
controller and drive firmware current.
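
Roughly this, options from memory so check the DSU docs:

    # list applicable firmware / driver updates without applying them
    dsu --preview
    # then run it for real and reboot as needed
    dsu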

> Are there any hints for disk/controller configuration/optimization?

Give us details: perccli /c0 show output, the actual test commands and results, etc.
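
At minimum, something like the following, attached as text (device names and 
OSD id are placeholders):

    perccli64 /c0 show all      # controller, cache, and drive inventory
    smartctl -a /dev/sdX        # per-drive firmware and error counters
    ceph osd df tree            # how OSDs map onto those drives
    ceph tell osd.N bench       # quick per-OSD baseline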

If you have to use an HBA, use a different one, one that isn’t saddled with a 
RoC (RAID-on-Chip).  Better yet, take an expansive look at TCO and don’t write 
off NVMe as infeasible.  If your cluster is experimental, hopefully you aren’t 
stuck with a lot of these.  Add up the cost of an RoC HBA, optionally with 
cache RAM and BBU/supercap; add in the cost delta for SAS HDDs over SATA; add 
in the operational hassle of managing WAL+DB on those boot SSDs; add in the 
extra HDDs you’ll have to provision because of their low IOPS.
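
Back-of-envelope, with made-up numbers just to show the shape of the math 
(substitute your own target and measured per-drive IOPS):

    # target client 4k write IOPS, replica count, measured per-HDD IOPS
    target=5000; repl=3; per_hdd=150
    # every client write lands on 'repl' OSDs, so, before any headroom
    # or BlueStore overhead:
    echo $(( target * repl / per_hdd ))   # -> 100 HDDs just for the writes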

> 
> 
> Thanks.
> 

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
