> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
> m13913886...@yahoo.com
> Sent: 20 July 2016 02:09
> To: Christian Balzer <ch...@gol.com>; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] how to use cache tiering with proxy in ceph-10.2.2
> 
> But the 0.94 version works fine (in fact, IO was greatly improved).

How are you measuring this? Is this just through micro-benchmarks, or 
something more realistic running over a number of hours?

> This problem occurs only in version 10.x.
> As you said, the IO was mostly going to the cold storage, and IO is slow.
> What can I do to improve the IO performance of cache tiering in version 10.x?
> How does cache tiering work in version 10.x?
> Is it a bug, or is the configuration very different from the 0.94 version?
> There is too little information about this on the official website.

There are a number of differences, but they should all have a positive effect 
on real-life workloads. It's important to focus more on the word tiering rather 
than caching: you don't want to be continually shifting large amounts of data 
to and from the cache, only the really hot bits.

The main changes between the two versions are the inclusion of proxy writes, 
promotion throttling and recency fixes. All of these reduce the amount of data 
that gets promoted.
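
As a rough sketch of the knobs involved (using the ssd_cache pool name from 
the earlier mails as a stand-in for your cache pool; the numbers are only 
placeholders, not recommendations), promotion behaviour in 10.x is mainly 
controlled by the per-pool recency settings and the per-OSD promotion 
throttles:

# require an object to appear in this many recent HitSets before promoting
ceph osd pool set ssd_cache min_read_recency_for_promote 2
ceph osd pool set ssd_cache min_write_recency_for_promote 2

# runtime promotion throttles on the OSDs (also put them in ceph.conf under
# [osd] so they survive a restart)
ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 4194304 --osd_tier_promote_max_objects_sec 10'

# check what an OSD is actually running with via its admin socket (run on the
# node hosting osd.0); unlike "ceph --show-config" this shows the live values
ceph daemon osd.0 config show | grep osd_tier_promote

The recency settings work against the pool's HitSets, so hit_set_count and 
hit_set_period need to be sensible as well.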

But please let me know how you are testing.

> 
> 
> 
> On Tuesday, July 19, 2016 9:25 PM, Christian Balzer <ch...@gol.com> wrote:
> 
> 
> Hello,
> 
> On Tue, 19 Jul 2016 12:24:01 +0200 Oliver Dzombic wrote:
> 
> > Hi,
> >
> > I have this in my ceph.conf under the [OSD] section:
> >
> > osd_tier_promote_max_bytes_sec = 1610612736
> > osd_tier_promote_max_objects_sec = 20000
> >
> > #ceph --show-config shows:
> >
> > osd_tier_promote_max_objects_sec = 5242880
> > osd_tier_promote_max_bytes_sec = 25
> >
> > But in fact it's working. Maybe there is a bug in showing the correct value.
> >
> > I had problems too, where the IO was mostly going to the cold storage.
> >
> > After I changed these values (and restarted >every< node inside the
> > cluster) the problem was gone.
> >
> > So I assume that it simply shows the wrong values when you call
> > show-config. Or there is some other miracle going on.
> >
> > I just checked:
> >
> > #ceph --show-config | grep osd_tier
> >
> > shows:
> >
> > osd_tier_default_cache_hit_set_count = 4
> > osd_tier_default_cache_hit_set_period = 1200
> >
> > while
> >
> > #ceph osd pool get ssd_cache hit_set_count
> > #ceph osd pool get ssd_cache hit_set_period
> >
> > show
> >
> > hit_set_count: 1
> > hit_set_period: 120
> >
> Apples and oranges.
> 
> Your first query is about the config (and thus default, as it says in the
> output) options, while the second one is about a specific pool.
> 
> There may still be all sorts of breakage with show-config, and having to
> restart OSDs for changes to take effect is inelegant at the very least, but
> the above is not a bug.
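> 
> As a rough illustration (using your pool name from above), the per-pool
> values are read and set with the pool commands:
> 
> #ceph osd pool set ssd_cache hit_set_count 4
> #ceph osd pool set ssd_cache hit_set_period 1200
> #ceph osd pool get ssd_cache hit_set_count
> 
> The osd_tier_default_* values that show-config reports are, as I understand
> it, just the defaults applied when a new cache tier is set up; changing them
> later does not alter an existing pool.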
> 
> Christian
> 
> >
> > So you can obviously ignore the ceph --show-config command. It's simply
> > not working correctly.
> >
> >
> 
> 
> --
> Christian Balzer        Network/Systems Engineer
> ch...@gol.com      Global OnLine Japan/Rakuten Communications
> http://www.gol.com/
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
