> Personally I am not too worried about the hypervisor-to-hypervisor
> traffic, as I am using a dedicated InfiniBand network for storage.
> It is not used for guest-to-guest traffic, internet traffic, or anything
> else. I would like to decrease, or at least smooth out, the traffic peaks
> between the hypervisors and the SAS/SATA OSD storage servers.
> I expect an SSD cache pool would let me do that, as the eviction traffic
> should be more structured than the random write I/O the guest VMs
> generate.
Sounds reasonable.
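For reference, a writeback cache tier in front of an existing pool is set
up roughly as below. This is only a sketch: the pool names, PG count, and
tunables are placeholder values you would size for your own hardware, and
the cache pool is assumed to already map onto the SSD OSDs via its own
CRUSH rule. The dirty/full ratios are what pace the flush traffic back to
the SAS/SATA OSDs:

# Create the cache pool (PG count is a placeholder) and tier it
# in front of the backing pool, here called "rbd"
ceph osd pool create ssd-cache 128 128
ceph osd tier add rbd ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay rbd ssd-cache

# Hit set tracking is required for cache pools
ceph osd pool set ssd-cache hit_set_type bloom
ceph osd pool set ssd-cache hit_set_count 1
ceph osd pool set ssd-cache hit_set_period 3600

# These pace flushing/eviction to the backing pool, i.e. the
# write-back traffic that actually hits the SAS/SATA OSDs
ceph osd pool set ssd-cache target_max_bytes 1099511627776   # 1 TB, example
ceph osd pool set ssd-cache cache_target_dirty_ratio 0.4
ceph osd pool set ssd-cache cache_target_full_ratio 0.8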

>> I'm very interested in the effect of caching pools in combination with
>> running VMs on them so I'd be happy to hear what you find ;)
> I will give it a try and share the results when we get the SSD kit.
Excellent, looking forward to it.


>> As a side note: Running OSDs on hypervisors would not be my preferred choice 
>> since hypervisor load might impact Ceph performance.
> Do you think it is a bad idea even if you have a lot of cores on the
> hypervisors, like 24 or 32 per host server?
> According to my monitoring, our OSD servers are not that stressed and
> generally have over 50% of their CPU capacity free.

The number of cores does not really matter if they are all busy ;)
I honestly do not know how Ceph behaves when it is CPU-starved, but I
guess it is not pretty.
Since your whole environment will come crumbling down if your storage
becomes unavailable, it is not a risk I would take lightly.
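If you do co-locate them anyway, you could at least pin the OSD daemons to
a dedicated set of cores so guest load cannot starve them completely. A
rough sketch using the libcgroup tools (the core range and single NUMA
node are assumptions for illustration; you would also keep the qemu/kvm
guests off those cores, e.g. via libvirt vCPU pinning):

# Create a cpuset cgroup for Ceph and reserve cores 0-7 for it
cgcreate -g cpuset:ceph
cgset -r cpuset.cpus=0-7 ceph
cgset -r cpuset.mems=0 ceph

# Move the running OSD daemons into that cgroup
for pid in $(pidof ceph-osd); do
    cgclassify -g cpuset:ceph $pid
done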

Cheers,
Robert van Leeuwen



