Re: error='Cannot allocate memory' (errno=12)

2015-05-12 Thread J. Ryan Earl
I see your ulimit -a above, missed that. You should increase the nofile
ulimit. If you were using JNA you'd need to increase memlock too, but you
probably aren't using JNA. 1024 nofile is the default and far too small; try
making it something like 64K. Thread handles can count against the
file-descriptor limit, just as pipes, sockets, etc. do.
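A minimal sketch of how such limits are typically raised persistently on Linux, assuming the service runs as user `cassandra` (the file path, user name, and values below are illustrative, not taken from this thread):

```shell
# Illustrative: a limits fragment raising nofile/memlock/nproc for the
# user running Cassandra. On a real host this would live under
# /etc/security/limits.d/; we write to /tmp here just to show the format.
cat > /tmp/cassandra-limits.conf <<'EOF'
cassandra - nofile  65536
cassandra - memlock unlimited
cassandra - nproc   32768
EOF
cat /tmp/cassandra-limits.conf
```

Note that limits from limits.d are applied via PAM at login, so the daemon must be restarted from a fresh session before the new values take effect.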


Re: error='Cannot allocate memory' (errno=12)

2015-05-12 Thread J. Ryan Earl
What's your ulimit -a output? Did you adjust the nproc and nofile ulimits up?
Do you have JNA installed? What about the memlock ulimit, and kernel.shmmax
in sysctl.conf?

What's in cassandra.log?
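One way to verify the limits the JVM actually runs with (they can differ from a fresh shell's `ulimit -a`) is to read `/proc/<pid>/limits`. The `pgrep` pattern below is an assumption, and the current shell's pid is used as a stand-in:

```shell
# Check the effective limits of an already-running process via /proc.
# For Cassandra you'd use something like: pid=$(pgrep -f CassandraDaemon)
pid=$$
grep -E 'Max (open files|processes|locked memory)' "/proc/$pid/limits"
```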


Re: error='Cannot allocate memory' (errno=12)

2015-05-11 Thread Rahul Bhardwaj
Hi Robert,

I saw you answering the same problem somewhere, but no solution was found.
Please check again.

Regards,
Rahul Bhardwaj





Re: error='Cannot allocate memory' (errno=12)

2015-05-11 Thread Rahul Bhardwaj
But it is giving the same error with Java 7 and OpenJDK.


Re: error='Cannot allocate memory' (errno=12)

2015-05-11 Thread Anishek Agarwal
Well, I haven't used 2.1.x Cassandra or Java 8, but is there any reason for
not using the Oracle JDK? I thought that's what is recommended. I saw a
thread earlier stating that Java 8 with Cassandra 2.0.14+ is tested, but I'm
not sure about the 2.1.x versions.



Re: error='Cannot allocate memory' (errno=12)

2015-05-11 Thread Rahul Bhardwaj
PFA the error log: hs_err_pid9656.log (attached)


Re: error='Cannot allocate memory' (errno=12)

2015-05-11 Thread Rahul Bhardwaj
free RAM:


free -m
             total       used       free     shared    buffers     cached
Mem:         64398      23753      40644          0        108       8324
-/+ buffers/cache:      15319      49078
Swap:         2925         15       2909


ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 515041
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 515041
virtual memory          (kbytes, -v) unlimited
file locks              (-x) unlimited


Also attaching complete error file
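For what it's worth, the free -m output above is less dire than the raw numbers suggest: the figure that matters is the free column of the "-/+ buffers/cache" line. A small sketch of that reading (the sample lines are condensed from the output above):

```shell
# Extract the memory actually available to applications (free plus
# reclaimable buffers/cache) from free -m style output.
free_out='Mem: 64398 23753 40644 0 108 8324
-/+ buffers/cache: 15319 49078'
avail_mb=$(echo "$free_out" | awk '/buffers\/cache/ {print $NF}')
echo "available_mb=$avail_mb"
```

Roughly 48 GB is available here, which is why a failed 12 KB allocation points at limits rather than exhausted RAM.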


On Mon, May 11, 2015 at 3:35 PM, Anishek Agarwal wrote:

> The memory Cassandra is trying to allocate is pretty small. Are you sure
> there is no hardware failure on the machine? What is the free RAM on the box?
>
> On Mon, May 11, 2015 at 3:28 PM, Rahul Bhardwaj <
> rahul.bhard...@indiamart.com> wrote:
>
>> Hi All,
>>
>> We have a cluster of 3 nodes with 64GB RAM each. The cluster was running
>> in a healthy state. Suddenly one machine's Cassandra daemon stopped
>> working and shut down.
>>
>> On restarting, it stops again after about 2 minutes, logging the error
>> below in cassandra.log:
>>
>> Java HotSpot(TM) 64-Bit Server VM warning: INFO:
>> os::commit_memory(0x7fd064dc6000, 12288, 0) failed; error='Cannot
>> allocate memory' (errno=12)
>> #
>> # There is insufficient memory for the Java Runtime Environment to
>> continue.
>> # Native memory allocation (malloc) failed to allocate 12288 bytes for
>> committing reserved memory.
>> # An error report file with more information is saved as:
>> # /tmp/hs_err_pid23215.log
>> INFO  09:50:41 Loading settings from
>> file:/etc/cassandra/default.conf/cassandra.yaml
>> INFO  09:50:41 Node configuration:[authenticator=AllowAllAuthenticator;
>> authorizer=AllowAllAuthorizer; auto_snapshot=true;
>> batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024;
>> cas_contention_timeout_in_ms=1000; client_encryption_options=;
>> cluster_name=Test Cluster; column_index_size_in_kb=64;
>> commit_failure_policy=stop;
>> commitlog_directory=/var/lib/cassandra/commitlog;
>> commitlog_segment_size_in_mb=64; commitlog_sync=periodic;
>> commitlog_sync_period_in_ms=1; compaction_throughput_mb_per_sec=16;
>> concurrent_compactors=4; concurrent_counter_writes=32; concurrent_reads=32;
>> concurrent_writes=32; counter_cache_save_period=7200;
>> counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000;
>> cross_node_timeout=false; data_file_directories=[/var/lib/cassandra/data];
>> disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1;
>> dynamic_snitch_reset_interval_in_ms=60;
>> dynamic_snitch_update_interval_in_ms=100;
>> endpoint_snitch=GossipingPropertyFileSnitch; hinted_handoff_enabled=true;
>> hinted_handoff_throttle_in_kb=1024; incremental_backups=false;
>> index_summary_capacity_in_mb=null;
>> index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false;
>> internode_compression=all; key_cache_save_period=14400;
>> key_cache_size_in_mb=null; listen_address=null;
>> max_hint_window_in_ms=1080; max_hints_delivery_threads=2;
>> memtable_allocation_type=heap_buffers; native_transport_port=9042;
>> num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner;
>> permissions_validity_in_ms=2000; range_request_timeout_in_ms=100;
>> read_request_timeout_in_ms=9;
>> request_scheduler=org.apache.cassandra.scheduler.NoScheduler;
>> request_timeout_in_ms=9; row_cache_save_period=0;
>> row_cache_size_in_mb=0; rpc_address=null; rpc_keepalive=true;
>> rpc_port=9160; rpc_server_type=sync;
>> saved_caches_directory=/var/lib/cassandra/saved_caches;
>> seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider,
>> parameters=[{seeds=206.191.151.199}]}];
>> server_encryption_options=; snapshot_before_compaction=false;
>> ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50;
>> start_native_transport=true; start_rpc=true; storage_port=7000;
>> thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=10;
>> tombstone_warn_threshold=1000; trickle_fsync=false;
>> trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=6;
>> write_request_timeout_in_ms=9]
>> ERROR 09:50:41 Exception encountered during startup
>> java.lang.OutOfMemoryError: unable to create new native thread
>> at java.lang.Thread.start0(Native Method) ~[na:1.7.0_60]
>> at java.lang.Thread.start(Thread.java:714) ~[na:1.7.0_60]
>> at
>> java.util.concurrent.ThreadPool
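
The truncated trace above ends in `java.lang.OutOfMemoryError: unable to create new native thread`, which despite its name is usually a process/thread limit or `vm.max_map_count` problem rather than exhausted RAM. A hedged sketch of the quick checks, using the current shell's pid as a stand-in for the Cassandra pid:

```shell
# Count native threads and memory mappings of a process; compare against
# `ulimit -u` (max user processes) and /proc/sys/vm/max_map_count.
pid=$$   # in practice: pid=$(pgrep -f CassandraDaemon)
threads=$(ls "/proc/$pid/task" | wc -l)
maps=$(wc -l < "/proc/$pid/maps")
echo "threads=$threads maps=$maps"
cat /proc/sys/vm/max_map_count
```

If the thread count is near the nproc limit, or the mapping count near max_map_count, thread creation fails with exactly this error even with tens of GB free.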

> rpc_port=9160; rpc_server_type=sync;
> saved_caches_directory=/var/lib/cassandra/saved_caches;
> seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider,
> parameters=[{seeds=206.191.151.199}]}];
> server_encryption_options=; snapshot_before_compaction=false;
> ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50;
> start_native_transport=true; start_rpc=true; storage_port=7000;
> thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=10;
> tombstone_warn_threshold=1000; trickle_fsync=false;
> trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=6;
> write_request_timeout_in_ms=9]
> ERROR 09:50:41 Exception encountered during startup
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method) ~[na:1.7.0_60]
> at java.lang.Thread.start(Thread.java:714) ~[na:1.7.0_60]
> at
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
> ~[na:1.7.0_60]
> at
> java.util.concurrent.ThreadPoolExecutor.ensurePrestart(ThreadPoolExecutor.java:1590)
> ~[na:1.7.0_60]
> at
> java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:333)
> ~[na:1.7.0_60]
> at
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:594)
> ~[na:1.7.0_60]
> at
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:61)
> ~[apache-cassandra-2.1.2.jar:2.1.2-SNAPSHOT]
> at org.apache.cassandra.gms.Gossiper.start(Gossiper.java:1188)
> ~[apache-cassandra-2.1.2.jar:2.1.2-SNAPSHOT]
> at
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:721)
> ~[apache-cassandra-2.1.2.jar:2.1.2-SNAPSHOT]
> at
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:643)
> ~[apache-cassandra-2.1.2.jar:2.1.2-SNAPSHOT]
> at
> org.apache.cassandra.service.StorageService.initSer
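
Following the advice at the top of the thread (raise nofile to roughly 64K, raise nproc, and raise memlock only if JNA is in use), a persistent fix normally goes through PAM limits rather than a one-off `ulimit` call. The file path and exact values below are a sketch based on that advice, not something stated in the thread:

```shell
# Sketch: persist higher limits for the user running Cassandra,
# typically via a drop-in such as /etc/security/limits.d/cassandra.conf:
#
#   cassandra  -  nofile   65536      # ~64K open files, per the advice above
#   cassandra  -  nproc    32768      # native threads count against nproc
#   cassandra  -  memlock  unlimited  # only needed when JNA mlockall is enabled
#
# After re-logging-in (or restarting the service), verify what the
# process environment actually received:
ulimit -n
ulimit -u
```

Note that limits set this way apply per login session, so the Cassandra daemon must be restarted from a fresh session (or via its init script) to pick them up.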