We have tried to use the simulator and designed various scenarios for testing
dmclock, but the results are only ideal when we test limit and proportion in
some scenarios; it fails in the reservation case. And we cannot ignore the
fact that the simulator differs from the actual Ceph running environment.

Dmclock proportion
Hi Eric,
We applied dmclock in Ceph, but in actual environments only the limit worked,
while the reservation and proportion had no real effect at all. After careful
analysis, we find that the dmclock algorithm, which developed from the mclock
algorithm, has significant theoretical defects.
First
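For reference, the tag-based scheduling that mClock (and its distributed
variant, dmClock) performs can be sketched roughly as follows. This is a
minimal, illustrative single-node sketch paraphrased from the mClock design,
not the actual dmclock library API; all class and function names are
hypothetical:

```python
class ClientState:
    """Per-client mClock tag state (illustrative names, not the dmclock API)."""

    def __init__(self, reservation, weight, limit):
        self.reservation = reservation  # minimum rate guaranteed (IOPS)
        self.weight = weight            # proportional share
        self.limit = limit              # maximum rate allowed (IOPS)
        self.r_tag = 0.0
        self.p_tag = 0.0
        self.l_tag = 0.0

    def tag(self, now):
        # Each tag advances by the inverse of its rate, but never lags
        # behind the wall clock, so an idle client cannot bank credit.
        self.r_tag = max(self.r_tag + 1.0 / self.reservation, now)
        self.p_tag = max(self.p_tag + 1.0 / self.weight, now)
        self.l_tag = max(self.l_tag + 1.0 / self.limit, now)


def pick_next(clients, now):
    """Constraint-based phase: serve overdue reservation tags first.
    Weight-based phase: otherwise, among clients still under their
    limit, serve the smallest proportional tag."""
    overdue = [c for c in clients if c.r_tag <= now]
    if overdue:
        return min(overdue, key=lambda c: c.r_tag)
    under_limit = [c for c in clients if c.l_tag <= now]
    if under_limit:
        return min(under_limit, key=lambda c: c.p_tag)
    return None
```

Whether reservation and proportion take effect in practice depends entirely on
the reservation tags becoming overdue and the limit tags staying eligible,
which is where behaviour in a real Ceph cluster can diverge from the simulator.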
Thanks Max, yes, the location hook is the ideal way. But as I have few NVMe
drives per node, I ended up using ceph.conf to add them to the correct location.
--
Deepak
On Jul 1, 2017, at 11:52 AM, Maxime Guyot wrote:
Hi Deepak,
As Wido pointed out in the thread you linked, "osd crush update on
start" and "osd crush location" are quick ways to fix this. If you are doing
custom locations (like for tiering NVMe vs HDD) "osd crush location hook"
(Doc:
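For illustration, a minimal ceph.conf sketch of the options mentioned above
(the OSD id, host/root names, and hook path are hypothetical examples, not
values from this thread):

```ini
[osd]
# Let OSDs set their CRUSH location on startup (the default behaviour)
osd crush update on start = true

# Pin a single OSD to a custom location explicitly:
[osd.12]
osd crush location = root=nvme host=node1-nvme

# Or compute the location with a custom script (e.g. for tiering NVMe vs HDD):
# [osd]
# osd crush location hook = /usr/local/bin/crush-location.sh
```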
> On 1 July 2017 at 1:04, Tu Holmes wrote:
>
>
> I would use the calculator at ceph and just set for "all in one".
>
> http://ceph.com/pgcalc/
>
I wouldn't do that. With CephFS the data pool(s) will contain many more
objects and much more data than the metadata pool.
You can
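The sizing heuristic behind pgcalc can be approximated like this (a rough
sketch of the usual rule of thumb, not the calculator's exact code;
`data_fraction` is the pool's expected share of the cluster's data, which is
why a CephFS data pool gets far more PGs than its metadata pool):

```python
def suggested_pg_count(num_osds, replica_size,
                       data_fraction=1.0, target_pgs_per_osd=100):
    """Rule-of-thumb PG sizing: scale the per-OSD PG target by the pool's
    expected share of the data, divide by the replication factor, and
    round up to the next power of two."""
    raw = num_osds * target_pgs_per_osd * data_fraction / replica_size
    pg = 1
    while pg < raw:
        pg *= 2
    return pg
```

For example, on a 10-OSD cluster with 3x replication, a data pool expected to
hold 90% of the data would get 512 PGs, while a 5% metadata pool would get 32.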
On Sat, Jul 1, 2017 at 9:29 AM, Nick Fisk wrote:
> -----Original Message-----
> From: Ilya Dryomov [mailto:idryo...@gmail.com]
> Sent: 30 June 2017 14:06
> To: Nick Fisk
> Cc: Ceph Users
> Subject: Re: [ceph-users] Kernel mounted RBD's hanging
>
> On Fri, Jun 30, 2017 at 2:14 PM, Nick Fisk