Sounds good...We can re-start the conversation, if it appears again. -Anil.
On Wed, Dec 27, 2017 at 7:24 AM, Vahram Aharonyan <[email protected]> wrote:

> Hi Anil,
>
> Very sorry for the very delayed response (2 months...); somehow my Outlook
> filters placed your mail in the wrong folder.
>
> Actually, we have not hit this issue again since then. We have multiple
> environments forming Geode clusters, but this was hit only in that setup,
> and only once. Basically, this is a very loaded setup where the client
> side has a lot of data to send. I guess some infrastructural conditions
> have to be met as well to produce that situation, because after a restart
> even this cluster did not hit the same issue again.
>
> Unfortunately, we don't have full logs/stats for this setup from that time.
>
> Answers to your questions are inlined below.
>
> Thanks for your time and input on this case. I will restart this
> communication if the issue pops up again, and will also keep an eye on
> the JIRA ticket you mentioned below.
>
> Best Regards,
> Vahram.
>
> *From:* Anilkumar Gingade [mailto:[email protected]]
> *Sent:* Tuesday, October 24, 2017 9:17 PM
> *To:* [email protected]
> *Subject:* Re: Client queue full
>
> Vahram,
>
> From your comments and log, it appears you have a durable client which
> registers interest for all keys on a partitioned region.
>
> Can you provide more detail on your use case and the point at which you
> start seeing the problem? For example:
>
> - What type of region is on the client side (proxy or caching-proxy)?
>
> [Vahram] It's a proxy region.
>
> - What operations/actions are performed in the client-side cache-listener?
>
> [Vahram] On create we get the key/value pair and add it into an internal
> data structure for further servicing.
>
> - Do you see the issue immediately or after some time? When this happens,
> do you see the client-side cache-listener getting invoked (maybe at a
> slow rate)? Stats/logs could help to show whether it is still connected.
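[For reference, the setup described above (a durable client, a PROXY region, a cache-listener, and interest registered for all keys) could be sketched roughly as below against the Geode client API. This is a hedged illustration, not the poster's actual code: the locator address, region name, durable-client id, and `process` helper are all hypothetical.]

```java
import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.cache.util.CacheListenerAdapter;

public class DurableSubscriberSketch {
    public static void main(String[] args) {
        // Durable client: the server keeps a subscription queue for this id
        // across disconnects. That server-side queue is the one reported
        // as full in this thread.
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("localhost", 10334)   // hypothetical locator
                .setPoolSubscriptionEnabled(true)
                .set("durable-client-id", "subscriber-1")
                .create();

        // PROXY region: no local storage; all data stays on the servers.
        Region<String, String> region = cache
                .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .addCacheListener(new CacheListenerAdapter<String, String>() {
                    @Override
                    public void afterCreate(EntryEvent<String, String> event) {
                        // Hand the new entry to internal data structures, as
                        // described above. Slow handling here slows the
                        // client's draining of the server-side queue.
                        process(event.getKey(), event.getNewValue());
                    }
                })
                .create("exampleRegion");             // hypothetical name

        // Durable interest in all keys: events queue on the server while
        // the client is disconnected.
        region.registerInterest("ALL_KEYS", true /* isDurable */);
        cache.readyForEvents();                       // start event delivery
    }

    private static void process(String key, String value) { /* ... */ }
}
```

The key point for this thread: the server-side queue drains only as fast as `afterCreate` returns, so anything slow inside the listener backs events up on the server.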
> [Vahram] We have seen this issue only once, and it lasted for a long time
> until we restarted the cluster. After the restart the issue was not
> reproduced. Unfortunately, we don't have full logs/stats with us now.
>
> - The client cache is durable; do you expect it to disconnect and
> reconnect often?
>
> [Vahram] Actually, we don't expect client re-enumerations to happen
> frequently.
>
> - Have you tried increasing the queue size?
>
> [Vahram] Nope.
>
> - Are there any other clients encountering a similar issue?
>
> [Vahram] Nope.
>
> - What interest policy are you using? If you are not interested in the
> values, only in the create operation, you can ignore the value.
>
> [Vahram] Actually, the values are important for us as well.
>
> -Anil.
>
> On Tue, Oct 24, 2017 at 9:07 AM, Michael Stolz <[email protected]> wrote:
>
> Maybe you shouldn't be using registerInterest at all if you don't have a
> CacheListener.
>
> If all you want is to ensure that you always get the latest version of
> the data on a client get(key), just switch your client Region to PROXY
> instead of CACHING_PROXY, and don't even bother registering interest.
>
> Interest registration without a CacheListener is very unusual.
>
> --
> Mike Stolz
> Principal Engineer, GemFire Product Lead
> Mobile: +1-631-835-4771
>
> On Tue, Oct 24, 2017 at 11:37 AM, Mangesh Deshmukh <[email protected]> wrote:
>
> The only workaround (unless your case is different) for this is to
> restart the client for which there is a queue build-up. Not an elegant
> solution, but we have to deal with it until we have some kind of fix.
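[On Anil's question "Have you tried increasing the queue size?": the subscription queue is sized on the server side. A minimal sketch of the relevant server cache.xml fragment is below; the port, capacity, and directory values are illustrative assumptions, not recommendations from this thread.]

```xml
<!-- Server-side cache.xml sketch: bound the per-client subscription
     queue and let it overflow to disk instead of blocking. -->
<cache-server port="40404"
              maximum-message-count="230000"
              message-time-to-live="180">
  <!-- eviction-policy "entry" (or "mem") overflows queued events to disk
       once capacity is reached; the default "none" keeps everything in
       memory. -->
  <client-subscription eviction-policy="entry"
                       capacity="80000"
                       overflow-directory="./queue-overflow"/>
</cache-server>
```

Note that a larger or overflowing queue only buys time if the client eventually drains it; a client that never consumes events will still build up an unbounded backlog on disk.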
> Thanks,
> Mangesh
>
> *From:* Vahram Aharonyan <[email protected]>
> *Reply-To:* "[email protected]" <[email protected]>
> *Date:* Tuesday, October 24, 2017 at 7:37 AM
> *To:* "[email protected]" <[email protected]>
> *Subject:* RE: Client queue full
>
> Hi Mangesh, Anil,
>
> Thank you for the useful info. I will go through the ticket and also the
> heap dump/statistics locally to understand whether the symptoms are the
> same or not.
>
> Meanwhile, could you please help me with the following: once this
> situation is hit, it goes on forever without recovering. What could help
> us get out of it? Is a cluster or specific-node restart the only way to
> get rid of this?
>
> Thanks,
> Vahram.
>
> *From:* Anilkumar Gingade [mailto:[email protected]]
> *Sent:* Monday, October 23, 2017 10:24 PM
> *To:* [email protected]
> *Subject:* Re: Client queue full
>
> Hi Vahram,
>
> The issue you are encountering and the one Mangesh is seeing may be
> different. I would encourage you to look at the ticket created for
> Mangesh's issue and the comments added, to see if they are the same. The
> comments could also help you understand/diagnose your issue:
>
> https://issues.apache.org/jira/browse/GEODE-3709
>
> -Anil.
>
> On Mon, Oct 23, 2017 at 9:42 AM, Mangesh Deshmukh <[email protected]> wrote:
>
> Hi Vahram,
>
> We are faced with a similar issue resulting in the same kind of log
> statements. I have another thread with the subject "Subscription Queue
> Full". There is no resolution yet for that.
> Thanks,
> Mangesh
>
> *From:* Vahram Aharonyan <[email protected]>
> *Reply-To:* "[email protected]" <[email protected]>
> *Date:* Monday, October 23, 2017 at 6:33 AM
> *To:* "[email protected]" <[email protected]>
> *Subject:* Client queue full
>
> Hi,
>
> We have a partitioned region and are invoking create(key, value) on it.
> On the other side we have a listener registered, so once a new entry is
> created we should get notified. Instead, we are continuously hitting this
> kind of message:
>
> [warning 2017/10/23 05:23:38.145 PDT 31d0a20b-d81d-490b-b2ff-19645ed52387
> <Task Processor worker thread 4> tid=0x5c0] Client queue for
> _gfe_non_durable_client_with_id_remote(74e9ba70-d7fc-47a1-abbc-4d9066511049:20486:loner):44573:bbf05510:74e9ba70-d7fc-47a1-abbc-4d9066511049_2_queue
> client is full.
>
> [info 2017/10/23 05:23:38.497 PDT 31d0a20b-d81d-490b-b2ff-19645ed52387
> <Task Processor worker thread 4> tid=0x5c0] Resuming with processing
> puts ...
>
> [warning 2017/10/23 05:43:54.778 PDT 31d0a20b-d81d-490b-b2ff-19645ed52387
> <SavePropertiesCompletionHandler> tid=0x100] Client queue for
> _gfe_non_durable_client_with_id_remote(74e9ba70-d7fc-47a1-abbc-4d9066511049:20486:loner):44573:bbf05510:74e9ba70-d7fc-47a1-abbc-4d9066511049_2_queue
> client is full.
>
> [info 2017/10/23 05:43:54.879 PDT 31d0a20b-d81d-490b-b2ff-19645ed52387
> <SavePropertiesCompletionHandler> tid=0x100] Resuming with processing
> puts ...
>
> Could someone provide some information on the circumstances in which this
> queue gets full and how it gets emptied?
>
> Thanks,
> Vahram.
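[On the question of how the queue fills and drains: roughly, the server enqueues an event for each subscribed client on each matching operation, the queue drains as the client consumes events, and when it reaches the configured capacity the server logs the "client is full" warning above and pauses put processing until space frees up. The same queue limits shown earlier in cache.xml form can also be set programmatically; the sketch below uses Geode's CacheServer/ClientSubscriptionConfig API with illustrative values, and assumes a server-side cache already being set up.]

```java
import java.io.IOException;

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.server.CacheServer;
import org.apache.geode.cache.server.ClientSubscriptionConfig;

public class SubscriptionQueueConfigSketch {
    public static void startServer() throws IOException {
        Cache cache = new CacheFactory().create();
        CacheServer server = cache.addCacheServer();
        server.setPort(40404);                        // hypothetical port

        // Per-client subscription queue limits (illustrative values).
        ClientSubscriptionConfig subscription =
                server.getClientSubscriptionConfig();
        subscription.setEvictionPolicy("entry");      // overflow by entry count
        subscription.setCapacity(80000);              // entries kept in memory
        subscription.setOverflowDirectory("./queue-overflow");

        server.start();
    }
}
```

As noted earlier in the thread, this only mitigates the symptom; if a client stops draining its queue entirely, the backlog still grows until the client is restarted.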
