
Most people run their clusters with no RAID for the data disks (some
run RAID for the journals, but we don't). We use the scrub mechanism
to find data inconsistencies, and we use three copies so that
replication effectively does the job of RAID across hosts, racks, etc.
Unless you have a specific need, it is best to forgo Linux SW RAID,
and even HW RAID, with Ceph.
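The setup described above can be sketched with Ceph's own CLI (a
minimal sketch, not from the original post; the pool name "mypool"
and the placement-group id are hypothetical, and min_size 2 is the
usual companion to size 3):

```shell
# With three replicas, the CRUSH map can place each copy on a
# different host or rack -- that replication is what replaces RAID.
ceph osd pool set mypool size 3        # keep three copies of every object
ceph osd pool set mypool min_size 2    # keep serving I/O with two copies left

# Scrubbing is what finds inconsistent or bit-rotted copies:
ceph pg scrub <pgid>                   # trigger a scrub on one placement group
```

Scrubs also run automatically on a schedule; the manual trigger is
just for checking a suspect placement group.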
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Mon, Nov 23, 2015 at 10:09 AM, Jose Tavares  wrote:
> Hi guys ...
>
> Is there any advantage in running Ceph on top of Linux SW-RAID to avoid
> data corruption due to disk bad blocks?
>
> Can we just rely on Ceph's scrubbing feature? Can we live without an
> underlying layer that keeps hardware problems from being passed up to
> Ceph?
>
> I have a setup with one OSD per node, backed by a 2-disk RAID-1. Is that
> a good option, or would it be better to have 2 OSDs, one on each disk?
> With one OSD per disk, I would have to increase the number of replicas
> to guarantee enough replicas if one node goes down.
>
> Thanks a lot.
> Jose Tavares
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
