FWIW, in our case, the GC was not the problem with Ignite. The heap issue
was already diagnosed, well known and unrelated. The problem was that the
slowdown in one node was causing all the other nodes in the grid to
basically lock up when reading from the cache, even though they were not
suffering any GC issues themselves. A simple problem in one node should not
be able to take the whole grid down, especially when talking about
replicated caches.
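
To make the scenario concrete, this is roughly the kind of setup I mean (a
minimal sketch only; the cache name and values are made up, not our actual
configuration):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class ReplicatedCacheSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // REPLICATED: every node keeps a full copy of the data, so in theory
        // a read should not have to depend on any single remote node.
        CacheConfiguration<String, String> cfg =
            new CacheConfiguration<>("myReplicatedCache");
        cfg.setCacheMode(CacheMode.REPLICATED);
        cfg.setReadFromBackup(true);

        ignite.getOrCreateCache(cfg).put("key", "value");
    }
}

That expectation (reads being served locally) is exactly why one slow node
stalling everybody else's reads was so surprising.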

GC issues, unrelated to Ignite, will probably appear associated with the
problem simply because GC is one of the most common ways a Java app slows
down without dying completely.
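
If anyone wants to rule GC in or out for their own case, the simplest way is
to enable GC logging on the node and correlate the pauses with the stalls.
For a Java 8 / G1 node, something along these lines (the log path is just an
example):

java -Xms10g -Xmx10g -XX:+UseG1GC \
     -Xloggc:/var/log/ignite/gc.log \
     -XX:+PrintGCDetails \
     -XX:+PrintGCDateStamps \
     -XX:+PrintGCApplicationStoppedTime \
     <rest of the Ignite node command line>

If the long pauses in the log line up with the times the grid locks up, it
is GC; if they do not, the problem is somewhere else.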

My 2c
D.


2016-09-16 18:19 GMT+02:00 Ignitebie [via Apache Ignite Users] <
ml-node+s70518n7807...@n6.nabble.com>:

> That would be a topic for discussion on how off-heap actually works. My
> understanding is that, to start with, object creation happens on heap (in
> the young generation) and objects are then moved to the old generation or
> off-heap.
>
> If the object allocation rate (I believe it is only after allocation that
> entries are associated with the key/value cache) is quite high, say larger
> than the young generation size per second, then the application will be
> busy with GC triggered by failed allocations.
>
>
> Do you see GC triggered for the young generation, and messages such as
> "Allocation Failure"? Have you enabled GC logging?
>
>
>
> On Fri, Sep 16, 2016 at 2:47 PM, yfernando <[hidden email]> wrote:
>
>> Thanks for your reply, Anmol. Do you know if there is a bug logged
>> against this which we can track?
>>
>> Also, it's not clear why the nodes would need to GC, since all the caches
>> are held off-heap and we have a 10G heap running G1GC.
>>
>>
>>
>
>
>
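
Coming back to the off-heap question above: if the explanation quoted above
is right, entries are still allocated on the Java heap first and only then
copied off-heap, so a high put rate can drive young-generation GC even when
the cache itself is off-heap. A minimal sketch of that kind of configuration
(Ignite 1.x style API; names and values are just examples):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class OffHeapCacheSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<String, byte[]> cfg =
            new CacheConfiguration<>("offHeapCache");
        // Entry data lives outside the Java heap once stored (Ignite 1.x memory mode).
        cfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
        // 0 should mean "no explicit off-heap limit"; check the javadoc for your version.
        cfg.setOffHeapMaxMemory(0);

        // The value is still allocated on heap here before Ignite copies it
        // off-heap, which is why a high put rate can still cause young-gen GC.
        ignite.getOrCreateCache(cfg).put("key", new byte[1024]);
    }
}

In our case, though, the nodes that locked up were not seeing any GC
pressure themselves, which is why I don't think heap behaviour explains
what we saw.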



