Yes, I also tried it on RAM disks and got basically the same results as with
NVMes :-) Capacitors do matter though, because while you get ~1000 T1Q1 iops with
them, you only get 100-200 iops without them. And it starts to resemble an HDD at
that point :-)
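For reference, the kind of single-job latency test I mean is roughly this (the
device path is just an example, and the test is destructive, so only point it at
an empty disk):

  fio -ioengine=libaio -direct=1 -fsync=1 -rw=randwrite -bs=4k -iodepth=1 -runtime=60 -name=t1q1 -filename=/dev/nvme0n1

With power-loss-protection capacitors the drive can acknowledge each fsync
immediately; without them it has to actually flush to NAND, which is where the
100-200 iops figure comes from.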
>From my profiling experiments I think there's no
Of course, but Telegram is popular, has a lot of interesting features, and it
used to fight censorship :)). Matrix is interesting too, but I suspect it
doesn't have that many users - we have 1650 members in our Russian Ceph
group, and that's definitely something.
27 July 2021 11:39:31
Google groups is only like 2007. Use Telegram @ceph_users and/or @ceph_ru :-)
24 July 2021 0:27:21 GMT+03:00, y...@deepunix.net wrote:
>I feel like Ceph is living in 2005. It's quite hard to find help on
>issues related to Ceph and it's almost impossible to get involved in
>helping others.
It loads successfully with LD_PRELOAD because, as I understand it, block_register()
gets called. QAPI and the new QAPI-based block device syntax don't work though,
because they're based on IDLs built into QEMU... QAPI will require patching, yeah.
It would be nicer if QAPI supported plugins too... :-)
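Something along these lines is what I mean (the .so path and the file= option
string are illustrative, not exact):

  LD_PRELOAD=/usr/lib/x86_64-linux-gnu/qemu/block-vitastor.so \
    qemu-system-x86_64 -drive 'file=vitastor:image=testimg,format=raw,if=virtio' ...

That is, the legacy -drive syntax can go through the driver registered when the
shared object is loaded; it's only the QAPI-based -blockdev path that needs the
IDLs compiled into QEMU.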
--
Yes
Not RBD, it has its own QEMU driver.
23 September 2020 3:24:23 GMT+03:00, Lindsay Mathieson wrote:
>On 23/09/2020 8:44 am, vita...@yourcmc.ru wrote:
>> There are more details in the README file which currently opens from
>> the domain https://vitastor.io
>
>that redirects to
Easy: the 883DCT has capacitors and the 970 EVO doesn't.
13 September 2020 0:57:43 GMT+03:00, Seena Fallah wrote:
>Hi. Why do you say the 883DCT is faster than the 970 EVO?
>I saw the specifications and the 970 EVO has higher IOPS than the 883DCT!
>Can you please tell me why the 970 EVO performs worse than the 883DCT?
By the way, DON'T USE rados bench. It's an incorrect benchmark. ONLY use fio.
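If you want a whole-cluster number from fio, the rbd engine works; a minimal
invocation looks roughly like this (pool and image names are placeholders, and
the image has to be created first):

  fio -ioengine=rbd -clientname=admin -pool=rbd -rbdname=testimg -name=test -rw=randwrite -bs=4k -iodepth=1 -runtime=60

Run it with -iodepth=1 for latency and something like -iodepth=128 for peak iops.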
10 September 2020 22:35:53 GMT+03:00, vita...@yourcmc.ru wrote:
>Hi George
>
>Author of Ceph_performance here! :)
>
>I suspect you're running tests with 1 PG. Every PG's requests are
>always serialized, that's why OSD
Hi Igor,
I think so because
1) space usage increases after each rebalance, even when the same PG is moved
twice (!)
2) I've used 4k min_alloc_size from the beginning
One crazy hypothesis is that maybe Ceph allocates space for uncompressed
objects, then compresses them and leaks
The WAL is 1 GB (you can allocate 2 GB to be sure); the DB should always be 30 GB.
And this doesn't depend on the size of the data partition :-)
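For example, something like this (device names are placeholders, assuming you've
pre-created a ~30 GB partition for the DB and a 1-2 GB one for the WAL on the
faster drive):

  ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2

As far as I understand, the 30 GB figure lines up with RocksDB's level sizing,
which is why sizes in between the level boundaries don't really buy you anything.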
14 March 2020 22:50:37 GMT+03:00, Victor Hooi wrote:
>Hi,
>
>I'm building a 4-node Proxmox cluster, with Ceph for the VM disk
>storage.
>
>On each node, I have:
>
Hello Jesper,
Assuming your SSDs are server ones with capacitors, I'd say the biggest impact
will come from CPUs: up to half the latency, and up to 2x... or maybe 1.5x
more iops in parallel mode.
If your SSDs are desktop ones, then new server NVMes will be a significant
improvement for writes,
I understand, but I still don't trust dd, and I still insist that you retest it
with fio when you observe strange behaviour again :-) Either it's MAGIC or "one
of the turtles is bullshitting", and then you just need to find that turtle.
--
With best regards,
Vitaliy