Ok, thanks. Is the expectation that events will be available on that socket as
soon as they occur, or is it more of a best-effort situation? I'm just trying
to nail down which side of the socket might be lagging. It's pretty difficult
to recreate this, as I have to hit the cluster very hard to get it to start
lagging.
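
In case it helps, the crude check I've been using to watch the lag looks
roughly like this (I'm assuming the "time" field name from the ops log JSON,
and I've omitted the delimiter cleanup between entries):

# print each entry's own timestamp next to the time we actually read it
nc -U /var/run/ceph/rgw-ops.sock | jq --unbuffered -r '.time' \
  | while read t; do echo "entry: $t  read at: $(date -u)"; done

If the entry timestamps trail the read times, the data is presumably sitting
on the radosgw side before my reader ever sees it; if they match, the lag is
downstream of the socket.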

Thanks, Aaron 

> On Apr 12, 2019, at 11:16 AM, Matt Benjamin <mbenj...@redhat.com> wrote:
> 
> Hi Aaron,
> 
> I don't think that exists currently.
> 
> Matt
> 
> On Fri, Apr 12, 2019 at 11:12 AM Aaron Bassett
> <aaron.bass...@nantomics.com> wrote:
>> 
>> I have a radosgw log centralizer that we use as an audit trail for data
>> access in our ceph clusters. We've enabled the ops log socket and added
>> logging of the http_authorization header to it:
>> 
>> rgw log http headers = "http_authorization"
>> rgw ops log socket path = /var/run/ceph/rgw-ops.sock
>> rgw enable ops log = true
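>> 
>> (A quick sanity check, in case it's useful: "ss -xl | grep rgw-ops" should
>> show radosgw listening on that path, and a bare
>> "nc -U /var/run/ceph/rgw-ops.sock" should start printing entries as
>> requests come in.)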
>> 
>> We have a daemon that listens on the ops socket, extracts/manipulates some 
>> information from the ops log, and sends it off to our log aggregator.
>> 
>> This setup works pretty well for the most part, except that when the cluster
>> comes under heavy load, it can get _very_ laggy - sometimes up to several
>> hours behind. I'm having a hard time nailing down what's causing this lag.
>> The daemon is rather naive, basically just some nc with jq in between, but
>> the log aggregator has plenty of spare capacity, so I don't think it's
>> slowing down how fast the daemon is consuming from the socket.
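>> 
>> For reference, the daemon boils down to roughly the following (the jq
>> filter and field names are illustrative, send_to_aggregator is a stand-in
>> for our shipper, and I've omitted the delimiter cleanup between entries):
>> 
>> # read ops log entries off the socket, trim each one down, ship it
>> nc -U /var/run/ceph/rgw-ops.sock \
>>   | jq --unbuffered -c '{time, remote_addr, uri, http_status}' \
>>   | send_to_aggregator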
>> 
>> I was revisiting the documentation about this ops log and noticed the
>> following, which I hadn't seen before:
>> 
>> When specifying a UNIX domain socket, it is also possible to specify the 
>> maximum amount of memory that will be used to keep the data backlog:
>> rgw ops log data backlog = <size in bytes>
>> Any backlogged data in excess of the specified size will be lost, so the
>> socket needs to be read constantly.
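>> 
>> So in principle I could raise it, e.g. something like:
>> 
>> rgw ops log data backlog = 67108864
>> 
>> (64 MiB; the value is just illustrative), though that would only buy
>> headroom to mask the lag, not explain it.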
>> 
>> I'm wondering if there's a way I can query radosgw for the current size of
>> that backlog, to help me narrow down where the bottleneck may be occurring.
>> 
>> Thanks,
>> Aaron
>> 
> 
> 
> -- 
> 
> Matt Benjamin
> Red Hat, Inc.
> 315 West Huron Street, Suite 140A
> Ann Arbor, Michigan 48103
> 
> http://www.redhat.com/en/technologies/storage
> 
> tel.  734-821-5101
> fax.  734-769-8938
> cel.  734-216-5309

