Hello Gregory,
Thanks for your input.
> * Ceph may not have the performance ceiling you're looking for. A
> write IO takes about half a millisecond of CPU time, which used to be
> very fast and is now pretty slow compared to an NVMe device. Crimson
> will reduce this but is not ready for real users.
There's a lot going on here. Some things I noticed you should be aware
of in relation to the tests you performed:
* Ceph may not have the performance ceiling you're looking for. A
write IO takes about half a millisecond of CPU time, which used to be
very fast and is now pretty slow compared to an NVMe device. Crimson
will reduce this but is not ready for real users.
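A quick back-of-the-envelope check of what half a millisecond of CPU time per write IO implies. This is only a sketch: the 0.5 ms figure is from the message above, while the NVMe rating is an illustrative assumption, not a measured number.

```python
# Ceiling implied by ~0.5 ms of CPU time per write IO (figure from the
# message above). The NVMe rating below is an illustrative assumption.
cpu_time_per_write_s = 0.0005                      # ~0.5 ms CPU per write IO
iops_per_core = 1 / cpu_time_per_write_s           # writes one core can drive
print(f"Write IOPS one core can push through Ceph: {iops_per_core:,.0f}")

nvme_rated_iops = 500_000                          # assumed datacenter NVMe rating
cores_to_saturate = nvme_rated_iops * cpu_time_per_write_s
print(f"Cores needed to saturate one such NVMe: {cores_to_saturate:,.0f}")
```

On these assumed numbers a single core tops out around 2,000 write IOPS, which is why the CPU, not the NVMe device, becomes the bottleneck.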
> SDS is not just about performance. You want something reliable for
> the next 10(?) years; the more data you have, the more this is going to be an
> issue. For me it is important that organisations like CERN and NASA are using
> it. If you look at this incident with the 'bug of the year' then
Hey,
> SDS is not just about performance. You want something reliable for the next
> 10(?) years; the more data you have, the more this is going to be an issue.
> For me it is important that organisations like CERN and NASA are using it.
> If you look at this incident with the 'bug of the year' then
> Most people just accept the way software works and take bad performance
> for granted. Most excuses came down to "cheap hardware", which I just
> eliminated by using ram disks.
SDS is not just about performance. You want something reliable for the next
10(?) years; the more data you have, the more this is going to be an issue.
Yes, I also tried it on ram disks and got basically the same results as with
NVMe drives :-) Capacitors do matter, though: while you get 1000 T1Q1 IOPS
with them, you get 100-200 IOPS without them. And it starts to resemble an
HDD at that point :-)
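The T1Q1 numbers above translate directly into per-IO latency: with one thread and queue depth one, average latency is simply the inverse of IOPS. A small sketch of that arithmetic, using the figures from the message:

```python
# T1Q1 = one thread, queue depth 1, so average latency ~= 1 / IOPS.
def t1q1_latency_ms(iops: float) -> float:
    """Per-IO latency in milliseconds implied by a T1Q1 IOPS figure."""
    return 1000.0 / iops

print(t1q1_latency_ms(1000))   # with capacitors (power-loss protection): 1 ms
print(t1q1_latency_ms(150))    # without: ~6-7 ms, in the range of an HDD seek
```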
From my profiling experiments I think there's no w
Hey Vitalif,
I found your wiki as well as your own software before. Pretty impressive
and I love your work!
I especially like your "Theoretical Maximum Random Access Performance"
section.
That is exactly what I would expect of Ceph's performance as well (which
is by design very close to your vi
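For readers who want to estimate such a ceiling themselves, the usual rule of thumb is that replicated writes divide the raw drive IOPS by the replica count, since every logical write lands on every replica. This is only a sketch of that rule; the exact model in the wiki section may differ, and the drive numbers below are illustrative.

```python
# Rule-of-thumb ceiling for cluster random IO with replication.
# Each logical write hits `replicas` drives; reads can spread over all drives.
def max_write_iops(drives: int, drive_write_iops: int, replicas: int) -> float:
    return drives * drive_write_iops / replicas

def max_read_iops(drives: int, drive_read_iops: int) -> float:
    return drives * drive_read_iops

# Illustrative numbers only: 25 NVMe drives, 3x replication.
print(max_write_iops(25, 100_000, 3))   # theoretical logical write ceiling
print(max_read_iops(25, 100_000))       # theoretical read ceiling
```

Actual results fall well below this ceiling because of the per-IO CPU cost discussed earlier in the thread.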
Hi, yes, it has software bottlenecks :-)
https://yourcmc.ru/wiki/Ceph_performance
If you just need block storage, try Vitastor: https://vitastor.io/
https://yourcmc.ru/git/vitalif/vitastor/src/branch/master/README.md - I made it
architecturally very similar to Ceph - or if you're fine with even
Hello Marc,
> I think you misread this. If you look at the illustration it is quite
> clear, going from 3×100,000 IOPS to 500 with Ceph. That should be a
> 'warning'.
In my case it's dropping from 5,000,000 to ~5,000 IOPS per server.
In this case I could use SD cards for my Ceph cluster. The bottleneck is
Hi Sascha,
> Thanks for your response. I wrote this email early in the morning, after
> spending the whole night and the last two weeks on benchmarking Ceph.
Yes, it is really bad that this is not advertised; lots of people waste time
on this.
> Most blog entries, forum research and tutorials complaining in
Hello Marc,
Thanks for your response. I wrote this email early in the morning, after
spending the whole night and the last two weeks on benchmarking Ceph.
The main reason I'm spending days on it is that I have poor performance
with about 25 NVMe disks, and I have gone down a long, long road with
hundreds of benchmarks
>
> The benchmark was monitored by using this tool here:
> https://github.com/ceph/ceph/blob/master/src/tools/histogram_dump.py also
> by looking at the raw data of "ceph daemon osd.7 perf dump".
Why are you testing with one OSD? You do not need Ceph if you only have
one disk.
You hav
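The `perf dump` output mentioned above is JSON, so latency counters can be extracted programmatically rather than read by eye. A minimal sketch, assuming the counter layout is `osd.op_w_latency` with `avgcount`/`sum` fields; counter names vary between Ceph releases, so verify against your version:

```python
import json
import subprocess

def avg_latency(perf: dict, counter: str = "op_w_latency") -> float:
    """Average latency in seconds from a perf-dump style dict.

    Assumes the counter lives under the "osd" section with avgcount/sum
    fields (check the layout on your Ceph version).
    """
    lat = perf["osd"][counter]
    return lat["sum"] / lat["avgcount"] if lat["avgcount"] else 0.0

def osd_write_latency(osd_id: int) -> float:
    """Query one OSD's admin socket; requires access to the ceph CLI."""
    out = subprocess.check_output(
        ["ceph", "daemon", f"osd.{osd_id}", "perf", "dump"])
    return avg_latency(json.loads(out))
```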