Thank you Eric.  That sounds like exactly my issue, though I'm surprised to 
bump into something like that on such a small system and at such low bandwidth.

But the information I can find on those parameters is sketchy to say the least.

Can you point me at some doco that explains what they do,  how to read the 
current values and how to set them?

Cheers,
Steve

-----Original Message-----
From: Eric Smith <eric.sm...@vecima.com> 
Sent: Monday, 11 May 2020 8:00 PM
To: Steve Hughes <ste...@scalar.com.au>; ceph-users@ceph.io
Subject: RE: [ceph-users] Re: Write Caching to hot tier not working as expected

It sounds like you might be bumping up against the default 
objecter_inflight_ops (1024) and/or objecter_inflight_op_bytes (100 MiB) limits.
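For reference, these are client-side Objecter throttles. A sketch of how to read and raise them on a cluster with the centralized config store (the values below are illustrative examples, not tuning recommendations; adjust for your workload):

```shell
# Read the current client values (defaults: 1024 ops / 104857600 bytes)
ceph config get client objecter_inflight_ops
ceph config get client objecter_inflight_op_bytes

# Raise the limits for all clients via the monitor config store
# (example values only -- double the defaults)
ceph config set client objecter_inflight_ops 2048
ceph config set client objecter_inflight_op_bytes 209715200
```

An equivalent ceph.conf fragment on the client host would be a `[client]` section with the same two options; note that clients read this at startup, so an RBD image would need to be remapped (or the VM restarted) to pick up the change.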

-----Original Message-----
From: ste...@scalar.com.au <ste...@scalar.com.au>
Sent: Monday, May 11, 2020 5:48 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Write Caching to hot tier not working as expected

Interestingly, I have found that if I limit the rate at which data is written, 
the tiering behaves as expected.

I'm using a robocopy job from a Windows VM to copy large files from my existing 
storage array to a test Ceph volume.  By using the /IPG parameter I can roughly 
control the rate at which data is written.
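As a rough back-of-envelope (assuming, per Microsoft's description of /IPG, that robocopy inserts an n-millisecond gap after each 64 KB block, and ignoring the block's own transfer time):

```python
# Upper bound on robocopy's write rate for a given /IPG:n value,
# assuming one n-ms inter-packet gap per 64 KB block.
BLOCK_KB = 64

def max_rate_mb_s(ipg_ms):
    """Ceiling on throughput in MB/s for /IPG:<ipg_ms>."""
    blocks_per_sec = 1000.0 / ipg_ms
    return blocks_per_sec * BLOCK_KB / 1024.0

# /IPG:2 caps the copy at roughly the 30 MB/s point observed below
print(round(max_rate_mb_s(2)))  # ~31 MB/s ceiling
```

So something like `robocopy D:\src Z:\dst /E /IPG:2` lands near the rate at which the hot tier keeps up in my testing.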

I've found that if I limit the write rate to around 30 MBytes/sec, the data all 
goes to the hot tier, zero data goes to the HDD tier, and the observed write 
latency is about 5 msec.  If I go any higher than this, I see data being written 
to the HDDs and the observed write latency goes way up.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
