Hi,
[Sorry I missed the body of your questions, here is my answer ;-]

On 11/05/2015 23:13, Somnath Roy wrote:
> Summary:
> -------------
>
> 1. It is doing pretty well on reads: 4 rados bench clients are saturating
> the 40 GbE network. With more physical servers it scales almost linearly,
> saturating 40 GbE on both hosts.
>
> 2. As suspected with Ceph, the problem is again with writes. Throughput-wise
> it is beating replicated pools by a significant margin, but it is not
> scaling with multiple clients and not saturating anything.
>
> So, my questions are the following.
>
> 1. This probably has nothing to do with the EC backend; we are suffering
> because of filestore inefficiencies. Do you think any tunable like the EC
> stripe size (or anything else) would help here?

I think Mark Nelson would be in a better position than me to answer, as he
has conducted many experiments with erasure coded pools.
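
For what it's worth, the stripe size is derived from the erasure code
profile and the pool creation defaults, so it is easy to experiment with. A
minimal sketch, assuming Hammer-era syntax (the profile name below is a
placeholder, and the stripe width option name may differ on your release,
please verify against the documentation):

    # inspect the parameters of the profile backing the pool
    ceph osd erasure-code-profile get myprofile

    # the default stripe width is small (4 KB); it can be raised
    # cluster-wide in ceph.conf before creating the pool:
    #   [global]
    #   osd pool erasure code stripe width = 65536

Larger stripes mean fewer but bigger chunk writes per object, which may or
may not help depending on where filestore is losing time.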


> 2. I couldn't set the failure domain to 'host' because of HW limitations.
> Do you think that will play a role in performance for bigger k values?

I don't see a reason why there would be a direct relationship between the
failure domain and the value of k. Do you have a specific example in mind?
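
The main constraint is simply that the cluster must have at least k+m
buckets of the chosen failure domain type to place all the chunks. As a
minimal sketch (profile name and k/m values are illustrative, Hammer-era
syntax):

    # 8 data + 3 coding chunks: needs >= 11 OSDs when the failure
    # domain is 'osd', but >= 11 hosts when it is 'host'
    ceph osd erasure-code-profile set myprofile \
        k=8 m=3 \
        ruleset-failure-domain=osd
    ceph osd pool create ecpool 128 128 erasure myprofile

Placement changes with the failure domain, but I would not expect the chunk
sizes or the write path to change with it.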

> 3. Even though it is not saturating 40 GbE for writes, do you think
> separating out the public/private networks will help in terms of
> performance?

I don't think so. What is the bottleneck? CPU or disk I/O?
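
If you want to try it anyway, the split only takes two options in ceph.conf
(the subnets below are placeholders) plus an OSD restart:

    [global]
    # client <-> OSD traffic
    public network = 10.0.0.0/24
    # OSD <-> OSD replication, recovery and backfill traffic
    cluster network = 10.0.1.0/24

But if neither network is saturated, moving replication traffic to a
separate link is unlikely to change the numbers.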

Cheers

-- 
Loïc Dachary, Artisan Logiciel Libre

