On Jan 5, 2017, 6:33 PM, "Joe Julian" <j...@julianfamily.org> wrote:

That's still not without its drawbacks, though I'm sure my case is
pretty rare. Ceph's automatic migration of data caused a cascading failure
and a complete loss of 580 TB of data due to a hardware bug. If it had been
on Gluster, none of it would have been lost.


I'm not talking only about automatic rebalance, but mostly about the ability
to add a single brick/server to a replica 3 volume.

Anyway, could you please share more details about the experience you had
with Ceph, and about what you mean by a hardware bug?
