Stracing the radosgw process, I see a lot of the following:


[pid 12364] sendmsg(23, {msg_name(0)=NULL, 
msg_iov(5)=[{"\7{\340\r\0\0\0\0\0P\200\16\0\0\0\0\0*\0?\0\10\0\331\0\0\0\0\0\0\0M"...,
 54}, 
{"\1\1\22\0\0\0\1\10\0\0\0\0\0\0\0\0\0\0\0\377\377\377\377\377\20\226\206\351\v3\0\0"...,
 217}, {"rgwuser_usage_log_read", 22}, 
{"\1\0011\0\0\0\0000\320Y\0\0\0\0\200\16\371Y\0\0\0\0\25\0\0\0DB0339"..., 55}, 
{"\305\234\203\332\0\0\0\0K~\356z\4\266\305\272\27hTx\5", 21}], 
msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 369

Does anybody know where this is coming from?
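The payload above contains the string "rgwuser_usage_log_read", so this looks like usage-log reads against the default.rgw.usage pool. To see what the gateway keeps reading there, something like the following should work (the pool name is taken from the graph; adjust it if your zone uses a different one):

```shell
# List the objects in the usage pool that the rgw keeps reading
rados -p default.rgw.usage ls

# Dump the accumulated usage-log entries themselves
radosgw-admin usage show
```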


Kind regards,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl



 From:   Mark Schouten <m...@tuxis.nl> 
 To:   <ceph-users@lists.ceph.com> 
 Sent:   24-10-2017 12:11 
 Subject:   [ceph-users] Lots of reads on default.rgw.usage pool 

Hi,


Since I upgraded to Luminous last week, I see a lot of read activity on the 
default.rgw.usage pool (see attached image). I think it has something to do with 
the rgw daemons, since restarting them slows the reads down for a while. It 
might also have to do with tenants and the fact that dynamic bucket sharding 
isn't working for me [1].


So this morning I disabled dynamic bucket sharding via 
'rgw_dynamic_resharding = false', but that doesn't seem to help. Maybe 
bucket sharding is still trying to run because of the entry in 'radosgw-admin 
reshard list' that I cannot delete?
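For reference, this is the setting I changed and the reshard-queue commands involved; the bucket name below is a placeholder, and on Luminous the 'reshard cancel' subcommand should remove a pending entry:

```shell
# In ceph.conf under the rgw client section, then restart the rgw:
#   rgw_dynamic_resharding = false

# Show the pending reshard queue
radosgw-admin reshard list

# Try to remove the stuck entry (bucket name is a placeholder)
radosgw-admin reshard cancel --bucket=mybucket
```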


[1]: 
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021774.html


Kind regards,

Mark Schouten  | Tuxis Internet Engineering

_______________________________________________ 
ceph-users mailing list 
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 

