What did you tune? Did you have to make a human sacrifice? :) Which release?
The last proper benchmark numbers I saw were from Hammer, and the latencies were 
basically still the same, about 2 ms per write.
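
For anyone who wants to reproduce the number, something like the rough Python
sketch below times queue-depth-1 4 KiB writes straight against RADOS
(python-rados assumed installed; the pool and object names are placeholders):

import time
import rados

# Rough sketch only: times synchronous (queue-depth-1) 4 KiB object writes.
# RADOS acks a write only once the OSDs have persisted it, so this is close
# to the sync-write latency being discussed here.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')          # placeholder pool name

data = b'\0' * 4096
samples = []
for _ in range(1000):
    t0 = time.time()
    ioctx.write('latency-probe', data)     # one outstanding op at a time
    samples.append(time.time() - t0)

samples.sort()
print("median: %.3f ms" % (samples[500] * 1000))
print("p99:    %.3f ms" % (samples[990] * 1000))

ioctx.remove_object('latency-probe')
ioctx.close()
cluster.shutdown()

Run it from the same host as the VM so the client-side path is included.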

Jan


> On 10 Sep 2015, at 16:38, Haomai Wang <haomaiw...@gmail.com> wrote:
> 
> 
> 
> On Thu, Sep 10, 2015 at 10:36 PM, Jan Schermer <j...@schermer.cz> wrote:
> 
>> On 10 Sep 2015, at 16:26, Haomai Wang <haomaiw...@gmail.com> wrote:
>> 
>> Actually we can reach 700 µs per 4k write IO at a single IO depth (2 copies, 
>> E5-2650, 10GbE, Intel S3700). So I think 400 read IOPS shouldn't be an 
>> unbridgeable problem.
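
(Just to anchor the arithmetic: at queue depth 1, IOPS is simply the inverse
of the per-op latency. A throwaway Python sketch:

# Queue depth 1: IOPS = 1 / per-op latency (illustrative arithmetic only)
for label, latency_s in [("700 us/op", 0.0007),
                         ("2 ms/op",   0.002),
                         ("0.5 ms/op", 0.0005)]:
    print("%-10s -> %4.0f IOPS" % (label, 1.0 / latency_s))

# 700 us/op -> ~1429 IOPS, 2 ms/op -> 500 IOPS,
# 0.5 ms/op -> the 2000 IOPS the customer is asking for.)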
>> 
> 
> Flushed to disk?
> 
> of course
>  
> 
> 
>> CPU is critical for an SSD backend, so what's your CPU model?
>> 
>> On Thu, Sep 10, 2015 at 9:48 PM, Jan Schermer <j...@schermer.cz> wrote:
>> It's certainly not a problem with DRBD (yeah, it's something completely 
>> different, but it's used for all kinds of workloads, including things like 
>> replicated tablespaces for databases).
>> It won't be a problem with VSAN (again, a bit different, but most people 
>> just want something like that).
>> It surely won't be a problem with e.g. ScaleIO, which should be comparable 
>> to Ceph.
>> 
>> Latency on the network can be very low (0.05 ms on my 10GbE). Latency on good 
>> SSDs is 2 orders of magnitude lower (as low as 0.00005 ms). Linux is pretty 
>> good nowadays at waking up threads and pushing the work through. Multiply those 
>> numbers by whatever factor and it's still just a fraction of the 0.5 ms 
>> needed.
>> The problem is, quite frankly, slow OSD code, and the only solution right now 
>> is to keep the data closer to the VM.
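
To put rough numbers on that, the sketch below adds up per-op hardware costs
for a replicated 4k write and compares them with the 0.5 ms budget that 2000
queue-depth-1 IOPS implies. The figures are illustrative placeholders (the SSD
number deliberately pessimistic), not measurements from any particular cluster:

# Illustrative only: per-op hardware cost vs. the 0.5 ms/op budget
budget_ms      = 0.5    # 1 / 2000 IOPS at queue depth 1
network_rtt_ms = 0.05   # 10GbE round trip, per hop (figure quoted above)
ssd_ack_ms     = 0.05   # deliberately pessimistic placeholder for a good SSD
network_hops   = 2      # client -> primary OSD, primary -> replica

hardware_ms = network_hops * network_rtt_ms + ssd_ack_ms
print("hardware:          %.2f ms" % hardware_ms)                # 0.15 ms
print("left for software: %.2f ms" % (budget_ms - hardware_ms))  # 0.35 ms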
>> 
>> Jan
>> 
>> > On 10 Sep 2015, at 15:38, Gregory Farnum <gfar...@redhat.com> wrote:
>> >
>> > On Thu, Sep 10, 2015 at 2:34 PM, Stefan Priebe - Profihost AG
>> > <s.pri...@profihost.ag> wrote:
>> >> Hi,
>> >>
>> >> We're happy running Ceph Firefly in production and reach enough 4k read
>> >> IOPS for multithreaded apps (around 23,000) with qemu 2.2.1.
>> >>
>> >> We now have a customer with a single-threaded application that needs
>> >> around 2000 IOPS, but we don't get above 600 IOPS in this case.
>> >>
>> >> Any tuning hints for this case?
>> >
>> > If the application really wants 2000 sync IOPS to disk without any
>> > parallelism, I don't think any network storage system is likely to
>> > satisfy him — that's only half a millisecond per IO. 600 IOPS is about
>> > the limit of what the OSD can do right now (in terms of per-op
>> > speeds), and although there is some work being done to improve that,
>> > it's not going to be in a released codebase for a while.
>> >
>> > Or perhaps I misunderstood the question?
>> 
>> 
>> 
>> 
>> -- 
>> Best Regards,
>> 
>> Wheat
>> 
> 
> 
> 
> 
> -- 
> Best Regards,
> 
> Wheat
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
