Thanks Eric.

Running your SET command reported that the OSD may need a restart (which would 
reset the value to its default anyway), but the commands below seem to work:
ceph tell osd.24 config set objecter_inflight_op_bytes 1073741824
ceph tell osd.24 config set objecter_inflight_ops 10240

Reading back the settings looks right:
[root@ceph00 ~]# ceph daemon osd.24 config show | grep objecter
    "debug_objecter": "0/1",
    "objecter_completion_locks_per_session": "32",
    "objecter_debug_inject_relock_delay": "false",
    "objecter_inflight_op_bytes": "1073741824",
    "objecter_inflight_ops": "10240",
    "objecter_inject_no_watch_ping": "false",
    "objecter_retry_writes_after_first_reply": "false",
    "objecter_tick_interval": "5.000000",
    "objecter_timeout": "10.000000",
    "osd_objecter_finishers": "1",


I've done that for the three OSDs that are in the cache tier, but the 
performance is unchanged - the writes still spill over to the HDD pool.
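(For reference, a small loop like the one below applies the same two settings to 
each cache-tier OSD in one pass; osd.25 and osd.26 are just placeholders for the 
other two OSD IDs.)

# osd.25 and osd.26 are placeholders - substitute the real cache-tier OSD IDs
for id in 24 25 26; do
    ceph tell osd.$id config set objecter_inflight_op_bytes 1073741824
    ceph tell osd.$id config set objecter_inflight_ops 10240
done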

Still, your idea sounds close - it does feel like something in the cache tier 
is hitting a limit.
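
In case it's useful, the cache-tier flush/evict thresholds can be read straight 
off the cache pool; "hot-pool" below is just a placeholder for the actual cache 
pool name:

# "hot-pool" is a placeholder - substitute the real cache pool name
ceph osd pool get hot-pool target_max_bytes
ceph osd pool get hot-pool target_max_objects
ceph osd pool get hot-pool cache_target_dirty_ratio
ceph osd pool get hot-pool cache_target_full_ratio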

Regards,
Steve

-----Original Message-----
From: Eric Smith <eric.sm...@vecima.com> 
Sent: Monday, 11 May 2020 9:11 PM
To: Steve Hughes <ste...@scalar.com.au>; ceph-users@ceph.io
Subject: RE: [ceph-users] Re: Write Caching to hot tier not working as expected

Reading and setting them should be pretty easy:

READ (Run from the host where OSD <id> is hosted):
ceph daemon osd.<id> config show | grep objecter
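
If you only want a single value, config get works too (again run from the OSD 
host); this is just an alternative to the grep above:

ceph daemon osd.<id> config get objecter_inflight_op_bytes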

SET (Assuming these can be set in memory):
ceph tell osd.<id> injectargs "--objecter-inflight-op-bytes=1073741824" (raises 
the in-flight bytes throttle to 1 GB)
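
Both throttles can also be bumped in a single injectargs call if you prefer (the 
10240 ops value here is just an illustrative figure, not a recommendation):

ceph tell osd.<id> injectargs "--objecter-inflight-op-bytes=1073741824 --objecter-inflight-ops=10240"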

To persist these you should add them to ceph.conf (I'm not sure which section, 
though - you might have to test this).
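
As a sketch only - the [osd] section here is an assumption on that same caveat, 
so verify which section actually takes effect:

[osd]
objecter_inflight_op_bytes = 1073741824
objecter_inflight_ops = 10240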
And yes - I agree the information is sketchy; I don't really have any further 
input here.

That's the best I can do for now 😊
Eric

-----Original Message-----
From: Steve Hughes <ste...@scalar.com.au> 
Sent: Monday, May 11, 2020 6:44 AM
To: Eric Smith <eric.sm...@vecima.com>; ceph-users@ceph.io
Subject: RE: [ceph-users] Re: Write Caching to hot tier not working as expected

Thank you Eric.  That 'sounds like' exactly my issue.  Though I'm surprised to 
bump into something like that on such a small system and at such low bandwidth.

But the information I can find on those parameters is sketchy to say the least.

Can you point me at some doco that explains what they do,  how to read the 
current values and how to set them?

Cheers,
Steve

-----Original Message-----
From: Eric Smith <eric.sm...@vecima.com> 
Sent: Monday, 11 May 2020 8:00 PM
To: Steve Hughes <ste...@scalar.com.au>; ceph-users@ceph.io
Subject: RE: [ceph-users] Re: Write Caching to hot tier not working as expected

It sounds like you might be bumping up against the default objecter_inflight_ops 
(1024) and/or objecter_inflight_op_bytes (100 MB) limits.
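
On the OSD host those defaults should show up as 1024 ops and 104857600 bytes 
(100 MB) in the running config - something like:

ceph daemon osd.<id> config show | grep objecter_inflight
    "objecter_inflight_op_bytes": "104857600",
    "objecter_inflight_ops": "1024",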

-----Original Message-----
From: ste...@scalar.com.au <ste...@scalar.com.au>
Sent: Monday, May 11, 2020 5:48 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Write Caching to hot tier not working as expected

Interestingly, I have found that if I limit the rate at which data is written, 
the tiering behaves as expected.

I'm using a robocopy job from a Windows VM to copy large files from my existing 
storage array to a test Ceph volume.  By using the /IPG parameter I can roughly 
control the rate at which data is written.
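
For example, assuming robocopy's documented behaviour of pausing for the given 
number of milliseconds between 64 KB blocks, /IPG:2 gives a ceiling of roughly 
64 KB every 2 ms, i.e. about 32 MBytes/sec (the paths below are placeholders):

robocopy <source> <destination> *.* /IPG:2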

I've found that if I limit the write rate to around 30 MBytes/sec, all of the 
data goes to the hot tier, none goes to the HDD tier, and the observed write 
latency is about 5 msec. If I go any higher than that, I see data being written 
to the HDDs and the write latency goes way up.