On 28-1-2019 02:56, Will Dennis wrote:
I mean to use CephFS on this PoC; the initial use would be to back up an
existing ZFS server with ~43TB of data (I may have to limit the backed-up data
depending on how much capacity I can get out of the OSD servers) and then share
it out via NFS as a read-only copy. That would give me some idea of I/O speeds
on writes and reads, and allow me to test different aspects of Ceph before I go
pitching it as a primary data storage technology (it will be our org's first
foray into SDS, and I want it to succeed.)
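
For the read-only NFS piece, the usual route is NFS-Ganesha with its Ceph
FSAL rather than re-exporting a kernel CephFS mount. A minimal sketch of the
export block (the export ID, pseudo path, and squash setting here are just
placeholders to adapt):

    EXPORT {
        Export_ID = 1;          # any unique ID
        Path = "/";             # CephFS path to export
        Pseudo = "/zfsbackup";  # NFSv4 pseudo path (placeholder)
        Access_Type = RO;       # read-only copy, per the plan above
        Squash = Root_Squash;
        FSAL {
            Name = CEPH;        # use the Ceph FSAL for CephFS
        }
    }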

No way I'd go primary production storage with this motley collection of 
"pre-loved" equipment :) If it all seems to work well, I think I could get a 
reasonable budget for new production-grade gear.

Perhaps superfluous, but here are my 2ct anyway.

I'd carefully define the term "all seems to work well".

I'm running several ZFS instances of equal or bigger size that are specifically tuned (buses, SSDs, memory, and ARC) to their usage, and they usually perform very well.

Now, if you define "works well" as performance close to what you get out of your ZFS store... be careful not to compare apples to oranges. You might need rather beefy hardware to get the Ceph cluster performing at the same level as your ZFS.

So you'd better define your PoC targets with realistic expectations.
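
One way to make those expectations concrete is to run the same benchmark on
both sides: rados bench against a scratch pool for the raw cluster, and an
identical fio job on the ZFS box and on the CephFS mount. The pool name,
mount point, and sizes below are only placeholders:

    # raw RADOS: 60s of 4M writes, then sequential reads of the same objects
    rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup
    rados bench -p testpool 60 seq -t 16
    rados -p testpool cleanup

    # same fio job, run once on the ZFS box and once on the CephFS mount
    fio --name=seqwrite --directory=/mnt/cephfs --rw=write --bs=4M \
        --size=8G --numjobs=4 --group_reporting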

--WjW



