[jira] [Updated] (CASSANDRA-9938) Significant GC pauses

     [ https://issues.apache.org/jira/browse/CASSANDRA-9938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Philip Thompson updated CASSANDRA-9938:
---------------------------------------
    Fix Version/s: 2.1.x

> Significant GC pauses
> ---------------------
>
>                 Key: CASSANDRA-9938
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9938
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Ubuntu 14.04, Java 1.8.0u45
>            Reporter: Robbie Strickland
>              Labels: gc
>             Fix For: 2.1.x
>         Attachments: gc_log.txt
>
> We have an 18-node analytics cluster, running 2.1.7 patched with CASSANDRA-9662. On a couple of the nodes we are seeing very long GC pauses, especially in old gen, and little space is reclaimed. Eventually these nodes OOM:
> {code}
> ERROR [SharedPool-Worker-167] 2015-07-30 00:36:20,746 JVMStabilityInspector.java:94 - JVM state determined to be unstable. Exiting forcefully due to: java.lang.OutOfMemoryError: Java heap space
> {code}
> We use G1 with the following settings:
> Max heap = 16G
> New size = 1.6G
> +UseTLAB +ResizeTLAB +PerfDisableSharedMem -UseBiasedLocking
> The nodes in question have average load profiles for the cluster, and caches are disabled on all tables. There is no obvious difference with the problematic nodes, and no other clear signs of trouble. Unfortunately we're currently getting an assertion error when trying to get a heap dump, or I would post that.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
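[Editorial note] The flag list in the report is abbreviated. A hypothetical reconstruction of how those settings would be spelled as full HotSpot options in a cassandra-env.sh fragment (the ticket does not show the actual file; option spellings such as `-Xmn` for "New size" are assumptions):

```shell
# Hypothetical reconstruction of the reported G1 settings as HotSpot flags.
JVM_OPTS=""
JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"
JVM_OPTS="$JVM_OPTS -Xms16G -Xmx16G"   # Max heap = 16G
JVM_OPTS="$JVM_OPTS -Xmn1600M"         # New size = 1.6G (an explicit young gen size disables G1's adaptive sizing)
JVM_OPTS="$JVM_OPTS -XX:+UseTLAB -XX:+ResizeTLAB"
JVM_OPTS="$JVM_OPTS -XX:+PerfDisableSharedMem"
JVM_OPTS="$JVM_OPTS -XX:-UseBiasedLocking"
echo "$JVM_OPTS"
```

Pinning the young generation with `-Xmn` under G1 is notable because it prevents the collector from shrinking or growing young gen to meet its pause-time goal.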
[jira] [Updated] (CASSANDRA-9938) Significant GC pauses
     [ https://issues.apache.org/jira/browse/CASSANDRA-9938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robbie Strickland updated CASSANDRA-9938:
-----------------------------------------
    Description: 
We have an 18-node analytics cluster, running 2.1.7 patched with CASSANDRA-9662. On a couple of the nodes we are seeing very long GC pauses, especially in old gen, and little space is reclaimed. Eventually these nodes OOM:
{code}
ERROR [SharedPool-Worker-167] 2015-07-30 00:36:20,746 JVMStabilityInspector.java:94 - JVM state determined to be unstable. Exiting forcefully due to: java.lang.OutOfMemoryError: Java heap space
{code}
We use G1 with the following settings:
Max heap = 16G
New size = 1.6G
+UseTLAB +ResizeTLAB +PerfDisableSharedMem -UseBiasedLocking
The nodes in question have average load profiles for the cluster, and caches are disabled on all tables. There is no obvious difference with the problematic nodes. Unfortunately we're currently getting an assertion error when trying to get a heap dump, or I would post that.

  was:
We have an 18-node analytics cluster, running 2.1.7 patched with CASSANDRA-9662. On a couple of the nodes we are seeing very long GC pauses, especially in old gen. Eventually these nodes OOM:
{code}
ERROR [SharedPool-Worker-167] 2015-07-30 00:36:20,746 JVMStabilityInspector.java:94 - JVM state determined to be unstable. Exiting forcefully due to: java.lang.OutOfMemoryError: Java heap space
{code}
We use G1 with the following settings:
Max heap = 16G
New size = 1.6G
+UseTLAB +ResizeTLAB +PerfDisableSharedMem -UseBiasedLocking
The nodes in question have average load profiles for the cluster, and caches are disabled on all tables. There is no obvious difference with the problematic nodes. Unfortunately we're currently getting an assertion error when trying to get a heap dump, or I would post that.
[jira] [Updated] (CASSANDRA-9938) Significant GC pauses
     [ https://issues.apache.org/jira/browse/CASSANDRA-9938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robbie Strickland updated CASSANDRA-9938:
-----------------------------------------
    Description: 
We have an 18-node analytics cluster, running 2.1.7 patched with CASSANDRA-9662. On a couple of the nodes we are seeing very long GC pauses, especially in old gen, and little space is reclaimed. Eventually these nodes OOM:
{code}
ERROR [SharedPool-Worker-167] 2015-07-30 00:36:20,746 JVMStabilityInspector.java:94 - JVM state determined to be unstable. Exiting forcefully due to: java.lang.OutOfMemoryError: Java heap space
{code}
We use G1 with the following settings:
Max heap = 16G
New size = 1.6G
+UseTLAB +ResizeTLAB +PerfDisableSharedMem -UseBiasedLocking
The nodes in question have average load profiles for the cluster, and caches are disabled on all tables. There is no obvious difference with the problematic nodes, and no other clear signs of trouble. Unfortunately we're currently getting an assertion error when trying to get a heap dump, or I would post that.

  was:
We have an 18-node analytics cluster, running 2.1.7 patched with CASSANDRA-9662. On a couple of the nodes we are seeing very long GC pauses, especially in old gen, and little space is reclaimed. Eventually these nodes OOM:
{code}
ERROR [SharedPool-Worker-167] 2015-07-30 00:36:20,746 JVMStabilityInspector.java:94 - JVM state determined to be unstable. Exiting forcefully due to: java.lang.OutOfMemoryError: Java heap space
{code}
We use G1 with the following settings:
Max heap = 16G
New size = 1.6G
+UseTLAB +ResizeTLAB +PerfDisableSharedMem -UseBiasedLocking
The nodes in question have average load profiles for the cluster, and caches are disabled on all tables. There is no obvious difference with the problematic nodes. Unfortunately we're currently getting an assertion error when trying to get a heap dump, or I would post that.
[jira] [Updated] (CASSANDRA-9938) Significant GC pauses
     [ https://issues.apache.org/jira/browse/CASSANDRA-9938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robbie Strickland updated CASSANDRA-9938:
-----------------------------------------
    Attachment: gc_log.txt
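[Editorial note] With gc_log.txt attached, the long old-gen pauses can be surfaced by filtering pause durations out of the log. A minimal sketch, assuming the common `-XX:+PrintGCDetails` format where each event line ends in `, <seconds> secs]` (the attachment's exact format is not shown in the ticket; the sample lines below are fabricated for illustration):

```shell
# Filter GC events pausing longer than 1 second from a PrintGCDetails-style log.
# sample_log stands in for the attached gc_log.txt; its lines are hypothetical.
sample_log=$(cat <<'EOF'
2015-07-30T00:35:01.123+0000: [GC pause (G1 Evacuation Pause) (young), 0.0421342 secs]
2015-07-30T00:36:10.456+0000: [Full GC (Allocation Failure)  15G->15G(16G), 24.1135540 secs]
EOF
)
long_pauses=$(printf '%s\n' "$sample_log" | awk -F', ' '
    /secs\]/ {
        secs = $NF                 # e.g. "24.1135540 secs]"
        sub(/ secs\]/, "", secs)   # strip the suffix, leaving the number
        if (secs + 0 > 1.0) print  # numeric compare; print the whole event line
    }')
printf '%s\n' "$long_pauses"
```

Against a real log, replacing the here-document with `cat gc_log.txt` would list every event exceeding the threshold, making the "little space is reclaimed" full GCs easy to spot.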
[jira] [Updated] (CASSANDRA-9938) Significant GC pauses
     [ https://issues.apache.org/jira/browse/CASSANDRA-9938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robbie Strickland updated CASSANDRA-9938:
-----------------------------------------
    Fix Version/s:     (was: 2.1.x)
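[Editorial note] Since the reporter hits an assertion error when taking a live heap dump, a fallback is to have the JVM write the dump itself at the next OOM. A sketch using standard HotSpot options (the dump path here is a placeholder, not from the ticket):

```shell
# Standard HotSpot flags to capture a heap dump automatically on OutOfMemoryError,
# avoiding a live jmap attach entirely. The path below is a placeholder.
JVM_OPTS=""
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/var/lib/cassandra/heapdump.hprof"
echo "$JVM_OPTS"
# If the attach mechanism does work, a live dump would look like:
#   jmap -dump:live,format=b,file=/tmp/cassandra.hprof <pid>
```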