You changed to 6 nodes because you were running out of disk? But you still replicate 100% of the data to all of them, so you don't gain anything.
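The per-node disk arithmetic behind that point can be sketched quickly (a minimal illustration with made-up numbers, not figures from this cluster): with SimpleStrategy and even token ownership, each node stores roughly total_data × RF / node_count, so an RF equal to the node count puts a full copy on every node no matter how many nodes you add.

```python
def per_node_load_gb(total_data_gb: float, rf: int, nodes: int) -> float:
    """Approximate per-node storage with SimpleStrategy and even token ownership."""
    return total_data_gb * rf / nodes

# RF equal to the node count: every node holds a full copy, so adding nodes saves nothing.
print(per_node_load_gb(500, rf=6, nodes=6))  # 500.0
# Keeping RF fixed while adding nodes is what actually spreads the data out.
print(per_node_load_gb(500, rf=2, nodes=6))  # ~166.7
```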
> On 10 Apr 2017, at 13:48, Cogumelos Maravilha <cogumelosmaravi...@sapo.pt> wrote:
>
> No.
>
> nodetool status, nodetool describecluster and nodetool ring all show a correct cluster.
>
> Not all nodes need to be seeds, but they can be.
>
> I had also run ALTER KEYSPACE system_auth WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 6 } AND durable_writes = false;
>
> And the first command on the new node was nodetool repair system_auth
>
>> On 04/10/2017 12:37 PM, Chris Mawata wrote:
>>
>> Notice
>>
>> .SimpleSeedProvider{seeds=10.100.100.19, 10.100.100.85, 10.100.100.185, 10.100.100.161, 10.100.100.52, 10.100.1000.213};
>>
>> Why do you have all six of your nodes as seeds? Is it possible that the last one you added used itself as the seed and is isolated?
>>
>> On Thu, Apr 6, 2017 at 6:48 AM, Cogumelos Maravilha <cogumelosmaravi...@sapo.pt> wrote:
>>>
>>> Yes, C* is running as cassandra:
>>>
>>> cassand+ 2267 1 99 10:18 ? 00:02:56 java -Xloggc:/var/log/cassandra/gc.log -ea -XX:+UseThreadPriorities -XX:Threa...
>>>
>>> INFO [main] 2017-04-06 10:35:42,956 Config.java:474 - Node configuration:
>>> [allocate_tokens_for_keyspace=null; authenticator=PasswordAuthenticator; authorizer=CassandraAuthorizer; auto_bootstrap=true; auto_snapshot=true;
>>> back_pressure_enabled=false; back_pressure_strategy=org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST};
>>> batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=null; broadcast_rpc_address=null; buffer_pool_use_heap_if_exhausted=true;
>>> cas_contention_timeout_in_ms=6000000; cdc_enabled=false; cdc_free_space_check_interval_ms=250; cdc_raw_directory=null; cdc_total_space_in_mb=0; client_encryption_options=<REDACTED>; cluster_name=company;
>>> column_index_cache_size_in_kb=2; column_index_size_in_kb=64; commit_failure_policy=ignore; commitlog_compression=null; commitlog_directory=/mnt/cassandra/commitlog; commitlog_max_compression_buffers_in_pool=3;
>>> commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=NaN; commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=null;
>>> compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_compactors=null; concurrent_counter_writes=32; concurrent_materialized_view_writes=32; concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32;
>>> counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=6000000;
>>> credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; credentials_validity_in_ms=2000; cross_node_timeout=false; data_file_directories=[Ljava.lang.String;@223f3642; disk_access_mode=auto; disk_failure_policy=ignore; disk_optimization_estimate_percentile=0.95;
>>> disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100;
>>> enable_scripted_user_defined_functions=false; enable_user_defined_functions=false; enable_user_defined_functions_threads=true; encryption_options=null; endpoint_snitch=SimpleSnitch; file_cache_size_in_mb=null;
>>> gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000; hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; hints_compression=null; hints_directory=/mnt/cassandra/hints; hints_flush_period_in_ms=10000;
>>> incremental_backups=false; index_interval=null; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; initial_token=null; inter_dc_stream_throughput_outbound_megabits_per_sec=200; inter_dc_tcp_nodelay=false; internode_authenticator=null;
>>> internode_compression=dc; internode_recv_buff_size_in_bytes=0; internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647; key_cache_save_period=14400; key_cache_size_in_mb=null;
>>> listen_address=10.100.100.213; listen_interface=null; listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null; max_streaming_retries=3; max_value_size_in_mb=256;
>>> memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null; memtable_flush_writers=0; memtable_heap_space_in_mb=null; memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50;
>>> native_transport_max_concurrent_connections=-1; native_transport_max_concurrent_connections_per_ip=-1; native_transport_max_frame_size_in_mb=256; native_transport_max_threads=128; native_transport_port=9042; native_transport_port_ssl=null;
>>> num_tokens=256; otc_coalescing_strategy=TIMEHORIZON; otc_coalescing_window_us=200; partitioner=org.apache.cassandra.dht.Murmur3Partitioner;
>>> permissions_cache_max_entries=1000; permissions_update_interval_in_ms=-1; permissions_validity_in_ms=2000; phi_convict_threshold=8.0; prepared_statements_cache_size_mb=null;
>>> range_request_timeout_in_ms=6000000; read_request_timeout_in_ms=6000000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_scheduler_id=null; request_scheduler_options=null; request_timeout_in_ms=6000000;
>>> role_manager=CassandraRoleManager; roles_cache_max_entries=1000; roles_update_interval_in_ms=-1; roles_validity_in_ms=2000;
>>> row_cache_class_name=org.apache.cassandra.cache.OHCProvider; row_cache_keys_to_save=2147483647; row_cache_save_period=0; row_cache_size_in_mb=0;
>>> rpc_address=10.100.100.213; rpc_interface=null; rpc_interface_prefer_ipv6=false; rpc_keepalive=true; rpc_listen_backlog=50; rpc_max_threads=2147483647; rpc_min_threads=16; rpc_port=9160; rpc_recv_buff_size_in_bytes=null; rpc_send_buff_size_in_bytes=null; rpc_server_type=sync;
>>> saved_caches_directory=/mnt/cassandra/saved_caches; seed_provider=org.apache.cassandra.locator.SimpleSeedProvider{seeds=10.100.100.19, 10.100.100.85, 10.100.100.185, 10.100.100.161, 10.100.100.52, 10.100.1000.213};
>>> server_encryption_options=<REDACTED>; slow_query_log_timeout_in_ms=6000000; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000;
>>> stream_throughput_outbound_megabits_per_sec=200; streaming_keep_alive_period_in_secs=300; streaming_socket_timeout_in_ms=86400000; thrift_framed_transport_size_in_mb=15; thrift_max_message_length_in_mb=16; thrift_prepared_statements_cache_size_mb=null;
>>> tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions@38c5cc4c;
>>> trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=6000000; unlogged_batch_across_partitions_warn_threshold=10; user_defined_function_fail_timeout=1500; user_defined_function_warn_timeout=500; user_function_timeout_policy=die; windows_timer_interval=1; write_request_timeout_in_ms=6000000]
>>>
>>> Thanks
>>>
>>>> On 04/06/2017 11:30 AM, benjamin roth wrote:
>>>> Have you checked the effective limits of a running CS process?
>>>> Is CS run as Cassandra? Just to rule out missing file perms.
>>>>
>>>> On 06.04.2017 at 12:24, "Cogumelos Maravilha" <cogumelosmaravi...@sapo.pt> wrote:
>>>> From cassandra.yaml:
>>>>
>>>> hints_directory: /mnt/cassandra/hints
>>>> data_file_directories:
>>>>     - /mnt/cassandra/data
>>>> commitlog_directory: /mnt/cassandra/commitlog
>>>> saved_caches_directory: /mnt/cassandra/saved_caches
>>>>
>>>> drwxr-xr-x 3 cassandra cassandra  23 Apr  5 16:03 mnt/
>>>>
>>>> drwxr-xr-x 6 cassandra cassandra  68 Apr  5 16:17 ./
>>>> drwxr-xr-x 3 cassandra cassandra  23 Apr  5 16:03 ../
>>>> drwxr-xr-x 2 cassandra cassandra  80 Apr  6 10:07 commitlog/
>>>> drwxr-xr-x 8 cassandra cassandra 124 Apr  5 16:17 data/
>>>> drwxr-xr-x 2 cassandra cassandra  72 Apr  5 16:20 hints/
>>>> drwxr-xr-x 2 cassandra cassandra  49 Apr  5 20:17 saved_caches/
>>>>
>>>> cassand+ 2267 1 99 10:18 ? 00:02:56 java -Xloggc:/var/log/cassandra/gc.log -ea -XX:+UseThreadPriorities -XX:Threa...
>>>> /dev/mapper/um_vg-xfs_lv  885G   27G  858G   4% /mnt
>>>>
>>>> In /etc/security/limits.conf:
>>>>
>>>> * - memlock unlimited
>>>> * - nofile 100000
>>>> * - nproc 32768
>>>> * - as unlimited
>>>>
>>>> In /etc/security/limits.d/cassandra.conf:
>>>>
>>>> cassandra - memlock unlimited
>>>> cassandra - nofile 100000
>>>> cassandra - as unlimited
>>>> cassandra - nproc 32768
>>>>
>>>> In /etc/sysctl.conf:
>>>>
>>>> vm.max_map_count = 1048575
>>>>
>>>> In /etc/sysctl.d/cassandra.conf:
>>>>
>>>> vm.max_map_count = 1048575
>>>> net.ipv4.tcp_keepalive_time = 600
>>>>
>>>> In /etc/pam.d/su:
>>>> ...
>>>> session required pam_limits.so
>>>> ...
>>>>
>>>> Distro is the current Ubuntu LTS.
>>>>
>>>> Thanks
>>>>
>>>>> On 04/06/2017 10:39 AM, benjamin roth wrote:
>>>>> Cassandra cannot write an SSTable to disk. Are you sure the disk/volume where SSTables reside (normally /var/lib/cassandra/data) is writeable for the CS user and has enough free space? The CDC warning also implies that.
>>>>> The other warnings indicate you are probably not running CS as root and you did not set an appropriate limit for max open files. Running out of open files can also be a reason for the IO error.
>>>>>
>>>>> 2017-04-06 11:34 GMT+02:00 Cogumelos Maravilha <cogumelosmaravi...@sapo.pt>:
>>>>>> Hi list,
>>>>>>
>>>>>> I'm using C* 3.10 in a 6-node cluster with RF=2. All instances are type i3.xlarge (AWS) with 32GB, 2 cores and an 885G SSD (LVM, XFS-formatted). I have one node that is always dying and I don't understand why. Can anyone give me some hints, please? All nodes use the same configuration.
>>>>>>
>>>>>> Thanks in advance.
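On the question of effective limits raised above: the files under /etc/security only determine what a new login session gets, so a daemon can end up running with different limits than those files suggest. A minimal sketch (an assumption-free use of Python's standard `resource` module, not anything Cassandra ships) for dumping the limits the current process actually has; run it as the cassandra user to compare against the configured values:

```python
import resource

def effective_limits():
    """Return {name: (soft, hard)} for the limits Cassandra cares about,
    as seen by the *current* process."""
    names = {
        "nofile": resource.RLIMIT_NOFILE,    # max open files
        "memlock": resource.RLIMIT_MEMLOCK,  # locked memory (the ENOMEM warning)
        "nproc": resource.RLIMIT_NPROC,      # max processes/threads
        "as": resource.RLIMIT_AS,            # address space
    }
    return {name: resource.getrlimit(rlim) for name, rlim in names.items()}

for name, (soft, hard) in effective_limits().items():
    print(f"{name}: soft={soft} hard={hard}")  # -1 means unlimited
```

To inspect the already-running JVM rather than a fresh shell, `cat /proc/2267/limits` (using the PID from the ps output above) shows the same information for the live process.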
>>>>>>
>>>>>> INFO  [IndexSummaryManager:1] 2017-04-06 05:22:18,352 IndexSummaryRedistribution.java:75 - Redistributing index summaries
>>>>>> ERROR [MemtablePostFlush:22] 2017-04-06 06:00:26,800 CassandraDaemon.java:229 - Exception in thread Thread[MemtablePostFlush:22,5,main]
>>>>>> org.apache.cassandra.io.FSWriteError: java.io.IOException: Input/output error
>>>>>>     at org.apache.cassandra.io.util.SequentialWriter.syncDataOnlyInternal(SequentialWriter.java:173) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.util.SequentialWriter.syncInternal(SequentialWriter.java:185) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.compress.CompressedSequentialWriter.access$100(CompressedSequentialWriter.java:38) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.compress.CompressedSequentialWriter$TransactionalProxy.doPrepare(CompressedSequentialWriter.java:307) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.util.SequentialWriter.prepareToCommit(SequentialWriter.java:358) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:367) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.sstable.format.SSTableWriter.prepareToCommit(SSTableWriter.java:281) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.prepareToCommit(SimpleSSTableMultiWriter.java:101) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1153) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1086) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_121]
>>>>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
>>>>>>     at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79) [apache-cassandra-3.10.jar:3.10]
>>>>>>     at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
>>>>>> Caused by: java.io.IOException: Input/output error
>>>>>>     at sun.nio.ch.FileDispatcherImpl.force0(Native Method) ~[na:1.8.0_121]
>>>>>>     at sun.nio.ch.FileDispatcherImpl.force(FileDispatcherImpl.java:76) ~[na:1.8.0_121]
>>>>>>     at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:388) ~[na:1.8.0_121]
>>>>>>     at org.apache.cassandra.utils.SyncUtil.force(SyncUtil.java:158) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.util.SequentialWriter.syncDataOnlyInternal(SequentialWriter.java:169) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     ... 15 common frames omitted
>>>>>>
>>>>>> INFO  [IndexSummaryManager:1] 2017-04-06 06:22:18,366 IndexSummaryRedistribution.java:75 - Redistributing index summaries
>>>>>> ERROR [MemtablePostFlush:31] 2017-04-06 06:39:19,525 CassandraDaemon.java:229 - Exception in thread Thread[MemtablePostFlush:31,5,main]
>>>>>> org.apache.cassandra.io.FSWriteError: java.io.IOException: Input/output error
>>>>>>     at org.apache.cassandra.io.util.SequentialWriter.syncDataOnlyInternal(SequentialWriter.java:173) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.util.SequentialWriter.syncInternal(SequentialWriter.java:185) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.compress.CompressedSequentialWriter.access$100(CompressedSequentialWriter.java:38) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.compress.CompressedSequentialWriter$TransactionalProxy.doPrepare(CompressedSequentialWriter.java:307) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.util.SequentialWriter.prepareToCommit(SequentialWriter.java:358) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:367) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.sstable.format.SSTableWriter.prepareToCommit(SSTableWriter.java:281) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.prepareToCommit(SimpleSSTableMultiWriter.java:101) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1153) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1086) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_121]
>>>>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
>>>>>>     at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79) [apache-cassandra-3.10.jar:3.10]
>>>>>>     at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
>>>>>> Caused by: java.io.IOException: Input/output error
>>>>>>     at sun.nio.ch.FileDispatcherImpl.force0(Native Method) ~[na:1.8.0_121]
>>>>>>     at sun.nio.ch.FileDispatcherImpl.force(FileDispatcherImpl.java:76) ~[na:1.8.0_121]
>>>>>>     at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:388) ~[na:1.8.0_121]
>>>>>>     at org.apache.cassandra.utils.SyncUtil.force(SyncUtil.java:158) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     at org.apache.cassandra.io.util.SequentialWriter.syncDataOnlyInternal(SequentialWriter.java:169) ~[apache-cassandra-3.10.jar:3.10]
>>>>>>     ... 15 common frames omitted
>>>>>>
>>>>>> INFO  [main] 2017-04-06 07:11:57,289 YamlConfigurationLoader.java:89 - Configuration location: file:/etc/cassandra/cassandra.yaml
>>>>>>
>>>>>> Some ERROR messages:
>>>>>>
>>>>>> ERROR [MemtablePostFlush:2] 2017-04-05 23:35:46,339 CassandraDaemon.java:229 - Exception in thread Thread[MemtablePostFlush:2,5,main]
>>>>>> ERROR [MemtablePostFlush:3] 2017-04-05 23:44:08,471 CassandraDaemon.java:229 - Exception in thread Thread[MemtablePostFlush:3,5,main]
>>>>>> ERROR [MemtablePostFlush:4] 2017-04-05 23:54:41,224 CassandraDaemon.java:229 - Exception in thread Thread[MemtablePostFlush:4,5,main]
>>>>>> ERROR [MessagingService-Incoming-/10.0.120.52] 2017-04-06 03:19:13,453 CassandraDaemon.java:229 - Exception in thread Thread[MessagingService-Incoming-/10.0.120.52,5,main]
>>>>>> ERROR [epollEventLoopGroup-2-6] 2017-04-06 03:24:41,006 CassandraDaemon.java:229 - Exception in thread Thread[epollEventLoopGroup-2-6,10,main]
>>>>>> ERROR [Native-Transport-Requests-36] 2017-04-06 03:25:45,915 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [Native-Transport-Requests-49] 2017-04-06 03:25:45,915 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [IndexSummaryManager:1] 2017-04-06 03:25:45,915 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [Native-Transport-Requests-69] 2017-04-06 03:25:45,916 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [Native-Transport-Requests-46] 2017-04-06 03:26:18,465 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [SharedPool-Worker-136] 2017-04-06 03:26:18,465 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [Native-Transport-Requests-156] 2017-04-06 03:26:18,465 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [SharedPool-Worker-92] 2017-04-06 03:26:24,696 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [Native-Transport-Requests-48] 2017-04-06 03:26:24,696 ?:? - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [Native-Transport-Requests-66] 2017-04-06 03:26:55,808 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [Native-Transport-Requests-77] 2017-04-06 03:26:55,808 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [GossipTasks:1] 2017-04-06 03:26:55,808 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [Native-Transport-Requests-133] 2017-04-06 03:26:55,808 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [Native-Transport-Requests-135] 2017-04-06 03:26:55,808 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [ScheduledFastTasks:1] 2017-04-06 03:26:55,808 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [Native-Transport-Requests-70] 2017-04-06 03:27:11,569 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [IndexSummaryManager:1] 2017-04-06 03:27:17,821 CassandraDaemon.java:229 - Exception in thread Thread[IndexSummaryManager:1,1,main]
>>>>>> ERROR [Native-Transport-Requests-103] 2017-04-06 03:27:24,049 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [Native-Transport-Requests-69] 2017-04-06 03:27:24,049 SEPWorker.java:145 - Failed to execute task, unexpected exception killed worker: {}
>>>>>> ERROR [SharedPool-Worker-98] 2017-04-06 03:27:24,049 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [MessagingService-Incoming-/10.0.120.52] 2017-04-06 03:27:55,079 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [epollEventLoopGroup-2-5] 2017-04-06 03:27:55,079 JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to:
>>>>>> ERROR [Native-Transport-Requests-64] 2017-04-06 03:28:43,285 SEPWorker.java:145 - Failed to execute task, unexpected exception killed worker: {}
>>>>>> ERROR [MemtablePostFlush:22] 2017-04-06 06:00:26,800 CassandraDaemon.java:229 - Exception in thread Thread[MemtablePostFlush:22,5,main]
>>>>>> ERROR [MemtablePostFlush:31] 2017-04-06 06:39:19,525 CassandraDaemon.java:229 - Exception in thread Thread[MemtablePostFlush:31,5,main]
>>>>>>
>>>>>> Also some WARNs:
>>>>>>
>>>>>> WARN  [main] 2017-04-06 09:26:49,725 CLibrary.java:178 - Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out, especially with mmapped I/O enabled. Increase RLIMIT_MEMLOCK or run Cassandra as root.
>>>>>> WARN  [main] 2017-04-06 09:25:07,355 StartupChecks.java:157 - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
>>>>>> WARN  [main] 2017-04-06 09:25:07,369 SigarLibrary.java:174 - Cassandra server running in degraded mode. Is swap disabled? : true, Address space adequate? : true, nofile limit adequate? : false, nproc limit adequate? : true
>>>>>> WARN  [main] 2017-04-06 09:25:07,091 DatabaseDescriptor.java:493 - Small cdc volume detected at /var/lib/cassandra/cdc_raw; setting cdc_total_space_in_mb to 2502. You can override this in cassandra.yaml
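One observation on the traces above: every FSWriteError bottoms out in sun.nio.ch.FileDispatcherImpl.force0 raising "Input/output error", i.e. the fsync() system call itself is failing at the kernel level, which points at the device or filesystem rather than at Cassandra (and on i3 instances the NVMe instance store is ephemeral and can simply go bad). A minimal sketch that reproduces what Cassandra's SyncUtil.force() does, outside Cassandra; the path argument is an example, so point it at the affected data volume and check dmesg if it raises:

```python
import os
import tempfile

def fsync_probe(directory: str) -> bool:
    """Write a block and fsync it, mirroring the force() call in the traces.
    An OSError with errno EIO here means the kernel is refusing writes:
    a device problem, not a Cassandra one."""
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, b"x" * 4096)
        os.fsync(fd)  # the call that raised "Input/output error" in the logs
        return True
    finally:
        os.close(fd)
        os.unlink(path)

print(fsync_probe("/tmp"))  # True on a healthy volume; OSError on a failing one
```

Running it against /mnt/cassandra/data on the dying node (as the cassandra user) would distinguish a failing disk from a permissions or open-files problem.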