Re: Limitations of Hinted Handoff OverloadedException exception

2018-07-12 Thread Karthick V
Refs:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRepairNodesHintedHandoff.html

On Thu, Jul 12, 2018 at 7:46 PM Karthick V  wrote:

> Hi everyone,
>
>  If several nodes experience brief outages simultaneously, substantial
>> memory pressure can build up on the coordinator. The coordinator tracks
>> how many hints it is currently writing, and if the number increases too
>> much, the coordinator refuses writes and throws an
>> OverloadedException.
>
>
>  The statement above says that beyond some number of in-flight hints the
> coordinator will refuse writes. Can someone explain this limit in depth,
> and what it depends on, if anything (disk size, for example)?
>
> Regards
> Karthick V
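
For reference, a rough sketch of where this limit lives. This assumes
Cassandra 3.x, and the config path is an assumption - adjust for your
install:

# How long a coordinator stores hints for an unreachable replica at all
# (cassandra.yaml, default 10800000 ms = 3 hours):
grep max_hint_window_in_ms /etc/cassandra/cassandra.yaml

# Throttle applied when stored hints are replayed to a recovered replica:
grep hinted_handoff_throttle_in_kb /etc/cassandra/cassandra.yaml

# The in-flight cap behind OverloadedException is not a cassandra.yaml
# setting: in the 3.x sources (StorageProxy) it is proportional to the
# number of cores, so it depends on CPU count rather than disk size.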


Clarification needed on how triggers execute on batch mutations

2018-07-12 Thread Rahul Singh
Folks,

I have a question regarding how mutations from batch statements fire
'TRIGGERS'.

In an unlogged batch with a single-partition mutation, I'm expecting one
partition to be affected, but does the trigger fire for each and every row?
In a logged batch on a single partition, I'm expecting the same as above.

In a logged batch with a multi-partition mutation, I'm expecting multiple
trigger firings, one per mutated partition, regardless of replicas (as in
the single-partition case). In an unlogged batch with a multi-partition
mutation, I'm expecting the same.

Since the coordinator does the write management, I am expecting that
regardless of whether I'm doing a logged or unlogged batch, the trigger on
any given table will fire only once per mutated partition.

Is my assumption correct?
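
For concreteness, a hedged sketch of the two batch shapes being discussed
(the keyspace, table, and columns are hypothetical):

# Single-partition batch: every statement shares pk=1, so the coordinator
# builds a single partition mutation covering both rows.
cqlsh -e "
BEGIN UNLOGGED BATCH
  INSERT INTO ks.tbl (pk, ck, val) VALUES (1, 1, 'a');
  INSERT INTO ks.tbl (pk, ck, val) VALUES (1, 2, 'b');
APPLY BATCH;"

# Multi-partition batch (BEGIN BATCH is logged by default): pk=1 and pk=2
# produce two partition mutations, so under the assumption above a trigger
# on ks.tbl would fire twice, once per partition.
cqlsh -e "
BEGIN BATCH
  INSERT INTO ks.tbl (pk, ck, val) VALUES (1, 3, 'c');
  INSERT INTO ks.tbl (pk, ck, val) VALUES (2, 1, 'd');
APPLY BATCH;"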

Rahul Singh

Chief Executive Officer | Internet Architecture

https://www.anant.us/datastax



1010 Wisconsin Ave NW, Suite 250

Washington, D.C. 20007





Best approach for node decommission

2018-07-12 Thread rajasekhar kommineni
Hi All,

Can anybody let me know the best approach for decommissioning a node in the
cluster? My cluster uses vnodes. Is there any way to verify that all the data
of the decommissioning node has been moved to the remaining nodes before
completely shutting down the server?

I followed the procedure below (a monitoring sketch follows the list):

1) nodetool flush
2) nodetool repair
3) nodetool decommission
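
A hedged sketch of how to watch the move while step 3 runs (standard
nodetool subcommands; exact output varies by version):

# On the node being decommissioned: streaming progress. The node reports
# Mode: LEAVING while it streams its ranges to the remaining nodes.
nodetool netstats

# From any other node: the leaving node shows status UL (Up/Leaving)
# until streaming completes, then disappears from the ring.
nodetool status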

The aggregate Load before the node3 decommission is 1411.47 MiB and after it
is 1380.15 MiB. Can I ignore the size difference and treat it as though all
of node3's data has been moved to the other nodes?

I am looking for a good data-validation process that does not depend on the
application team for verification.

Total load : 1411.47

– Address Load Tokens Owns Host ID Rack
UN node1 220.48 MiB 256 ? ff09b08b-29c1-4365-a3b7-1eea51f7d575 rack1
UN node2 216.53 MiB 256 ? 4b565a31-4c77-418f-a47f-5e0eb2ec5624 rack1
UN node3 64.52  MiB 256 ? 12b29812-cc60-456c-95a9-0e339c249bc8 rack1
UN node4 195.84 MiB 256 ? 0424a882-de4f-4e6a-b642-6ce9f4621e04 rack1
UN node5 179.07 MiB 256 ? 2f291a2e-b10d-4364-8192-13e107a9c322 rack1
UN node6 213.75 MiB 256 ? cf10166b-cfae-44fd-8bca-f55a4f9ef491 rack1
UN node7 158.54 MiB 256 ? ef8454c7-3005-487a-a3d4-e0065edfd99f rack1
UN node8 162.74 MiB 256 ? 7d786e46-1c11-485c-a943-bbcca6729ae1 rack1

Total Load : 1380.15

– Address Load Tokens Owns Host ID Rack
UN node1 229.04 MiB 256 ? ff09b08b-29c1-4365-a3b7-1eea51f7d575 rack1
UN node2 225.52 MiB 256 ? 4b565a31-4c77-418f-a47f-5e0eb2ec5624 rack1
UN node4 195.84 MiB 256 ? 0424a882-de4f-4e6a-b642-6ce9f4621e04 rack1
UN node5 179.07 MiB 256 ? 2f291a2e-b10d-4364-8192-13e107a9c322 rack1
UN node6 229.4  MiB 256 ? cf10166b-cfae-44fd-8bca-f55a4f9ef491 rack1
UN node7 158.54 MiB 256 ? ef8454c7-3005-487a-a3d4-e0065edfd99f rack1
UN node8 162.74 MiB 256 ? 7d786e46-1c11-485c-a943-bbcca6729ae1 rack1

Thanks,



Re: cassandra cluster sizing

2018-07-12 Thread Jeff Jirsa
You can certainly go higher than a terabyte - 4 TB or so is common. I've
heard of people doing up to 12 TB, with the awareness that time to replace
scales with size on disk, so a very large host will take longer to rebuild
than a small host.

The 50% free guidance only applies to size-tiered compaction, and given your
throughput you may prefer leveled compaction anyway. With leveled you should
target 30% free for compaction and repair.
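
For example, switching a table to leveled compaction is a single ALTER; the
keyspace/table here are hypothetical and the sstable size is just the usual
default, not a recommendation:

cqlsh -e "ALTER TABLE ks.tbl
          WITH compaction = {'class': 'LeveledCompactionStrategy',
                             'sstable_size_in_mb': '160'};"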

You don't need more than one Cassandra instance per host for 4 TB, but you
may want to consider it for more than that - multiple instances are
especially useful if you have multiple (lots of) disks and are running
Cassandra before CASSANDRA-6696 (which made JBOD safer).

-- 
Jeff Jirsa


> On Jul 12, 2018, at 7:37 AM, Vitaliy Semochkin  wrote:
> 
> Hi,
> 
> What is the maximum amount of data a Cassandra 3 server in a cluster can
> serve? The documentation says it is only 1 TB.
> If the load is not high (only about 100 requests per second with 1 KB of
> data each), is it safe to go above 1 TB (let's say 5 TB per server)?
> What would be a safe maximum disk size for a server in such a cluster?
> 
> The documentation also says that compaction requires 50% of the disk to be
> free. Given that I don't have update operations (only inserts), do I need
> that much extra space for compaction?
> 
> In articles (outside the Datastax docs) I read that it is common practice
> to launch more than one Cassandra server on one physical server in order
> to use more than 1 TB of hard drive per server; is that recommended?
> 




cassandra cluster sizing

2018-07-12 Thread Vitaliy Semochkin
Hi,

What is the maximum amount of data a Cassandra 3 server in a cluster can
serve? The documentation says it is only 1 TB.
If the load is not high (only about 100 requests per second with 1 KB of
data each), is it safe to go above 1 TB (let's say 5 TB per server)?
What would be a safe maximum disk size for a server in such a cluster?

The documentation also says that compaction requires 50% of the disk to be
free. Given that I don't have update operations (only inserts), do I need
that much extra space for compaction?

In articles (outside the Datastax docs) I read that it is common practice
to launch more than one Cassandra server on one physical server in order
to use more than 1 TB of hard drive per server; is that recommended?




Limitations of Hinted Handoff OverloadedException exception

2018-07-12 Thread Karthick V
Hi everyone,

 If several nodes experience brief outages simultaneously, substantial
> memory pressure can build up on the coordinator. The coordinator tracks
> how many hints it is currently writing, and if the number increases too
> much, the coordinator refuses writes and throws an
> OverloadedException.


 The statement above says that beyond some number of in-flight hints the
coordinator will refuse writes. Can someone explain this limit in depth,
and what it depends on, if anything (disk size, for example)?

Regards
Karthick V


Re: Compaction out of memory

2018-07-12 Thread Jeff Jirsa
Probably close - maybe file handles or map counts. The output of ulimit -a
and/or

cat /proc/sys/vm/max_map_count

would be useful.
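
Something like the following; the values are the ones the usual
Cassandra/DataStax install guidance recommends, so treat them as a starting
point rather than gospel:

# Check the limits the node is actually running with:
ulimit -a
cat /proc/sys/vm/max_map_count

# Commonly recommended production settings:
sysctl -w vm.max_map_count=1048575
# and in /etc/security/limits.conf (or the systemd unit):
#   cassandra - memlock unlimited
#   cassandra - nofile  100000
#   cassandra - nproc   32768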

-- 
Jeff Jirsa


> On Jul 12, 2018, at 3:47 AM, Hannu Kröger  wrote:
> 
> Could the problem be that the process ran out of file handles? Recommendation 
> is to tune that higher than the default. 
> 
> Hannu
> 
>> onmstester onmstester  wrote on 12.7.2018 at 12.44:
>> 
>> Cassandra crashed in two out of 10 nodes in my cluster within 1 day, the 
>> error is:
>> 
>> ERROR [CompactionExecutor:3389] 2018-07-10 11:27:58,857 
>> CassandraDaemon.java:228 - Exception in thread 
>> Thread[CompactionExecutor:3389,1,main]
>> org.apache.cassandra.io.FSReadError: java.io.IOException: Map failed
>> at 
>> org.apache.cassandra.io.util.ChannelProxy.map(ChannelProxy.java:157) 
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.io.util.MmappedRegions$State.add(MmappedRegions.java:310)
>>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.io.util.MmappedRegions$State.access$400(MmappedRegions.java:246)
>>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.io.util.MmappedRegions.updateState(MmappedRegions.java:170)
>>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.io.util.MmappedRegions.<init>(MmappedRegions.java:73) 
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.io.util.MmappedRegions.<init>(MmappedRegions.java:61) 
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.io.util.MmappedRegions.map(MmappedRegions.java:104) 
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.io.util.FileHandle$Builder.complete(FileHandle.java:362)
>>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:290)
>>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:179)
>>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:134)
>>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:65)
>>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:142)
>>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:201)
>>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:85)
>>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:275)
>>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>> at 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
>> ~[na:1.8.0_65]
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
>> ~[na:1.8.0_65]
>> at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>  ~[na:1.8.0_65]
>> at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>  [na:1.8.0_65]
>> at 
>> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>>  [apache-cassandra-3.11.2.jar:3.11.2]
>> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_65]
>> Caused by: java.io.IOException: Map failed
>> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:939) 
>> ~[na:1.8.0_65]
>> at 
>> org.apache.cassandra.io.util.ChannelProxy.map(ChannelProxy.java:153) 
>> ~[apache-cassandra-3.11.2.jar:3.11.2]
>> ... 23 common frames omitted
>> Caused by: java.lang.OutOfMemoryError: Map failed
>> at sun.nio.ch.FileChannelImpl.map0(Native Method) ~[na:1.8.0_65]
>> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:936) 
>> ~[na:1.8.0_65]
>> ... 24 common frames omitted
>> 
>> Each node has 128 GB of RAM, of which 32 GB is allocated as Cassandra heap.
>> 
>> 
>> 


Re: default_time_to_live vs TTL on insert statement

2018-07-12 Thread Nitan Kainth
Okay, so it means that a regular update, and any TTL set with a write,
overrides the default setting. Which means the DataStax documentation is
incorrect and should be updated.


> On Jul 12, 2018, at 9:35 AM, DuyHai Doan  wrote:
> 
> To set TTL on a column only and not on the whole CQL row, use UPDATE
> instead:
> 
> UPDATE <table> USING TTL xxx SET <column> = <value> WHERE partition = yyy
> 
>> On Thu, Jul 12, 2018 at 2:42 PM, Nitan Kainth  wrote:
>> 
>> Kurt,
>> 
>> It is the same in the Apache documentation too; I am not able to find it
>> right now.




Re: default_time_to_live vs TTL on insert statement

2018-07-12 Thread DuyHai Doan
To set TTL on a column only and not on the whole CQL row, use UPDATE
instead:

UPDATE <table> USING TTL xxx SET <column> = <value> WHERE partition = yyy
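
For example, against the 'test' table from earlier in this thread (the TTL
value is illustrative):

cqlsh -e "UPDATE test USING TTL 600
          SET description = 'name description5'
          WHERE name = 'name5';"

Note that the UPDATE rewrites the column value, so the new TTL applies to
that cell only; other columns in the row keep their existing TTLs.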

On Thu, Jul 12, 2018 at 2:42 PM, Nitan Kainth  wrote:

> Kurt,
>
> It is the same in the Apache documentation too; I am not able to find it
> right now.
>
> But my question is:
> How to set TTL for a whole column?
>
> On Wed, Jul 11, 2018 at 11:36 PM, kurt greaves 
> wrote:
>
>> The Datastax documentation is wrong. It won't error, and it shouldn't. If
>> you want to fix that documentation I suggest contacting Datastax.
>>
>> On 11 July 2018 at 19:56, Nitan Kainth  wrote:
>>
>>> Hi DuyHai,
>>>
>>> Could you please explain in what case C* will error based on the documented
>>> statement:
>>>
>>> You can set a default TTL for an entire table by setting the table's
>>> default_time_to_live
>>> 
>>>  property. If you try to set a TTL for a specific column that is longer
>>> than the time defined by the table TTL, Cassandra returns an error.
>>>
>>>
>>>
>>> On Wed, Jul 11, 2018 at 2:34 PM, DuyHai Doan 
>>> wrote:
>>>
 default_time_to_live
 
  property applies if you don't specify any TTL on your CQL statement

 However you can always override the default_time_to_live
 
  property by specifying a custom value for each CQL statement

 The behavior is correct, nothing wrong here

 On Wed, Jul 11, 2018 at 7:31 PM, Nitan Kainth 
 wrote:

> Hi,
>
> As per document: https://docs.datastax.com/en/cql/3.3/cql/cql_using
> /useExpireExample.html
>
>
>-
>
>You can set a default TTL for an entire table by setting the
>table's default_time_to_live
>
> 
> property. If you try to set a TTL for a specific column that is
>longer than the time defined by the table TTL, Cassandra returns an 
> error.
>
>
> When I tried to test this statement, I found that we can insert data with
> a TTL greater than default_time_to_live. Does the document need
> correction, or am I misunderstanding it?
>
>
> CREATE TABLE test (
>
> name text PRIMARY KEY,
>
> description text
>
> ) WITH bloom_filter_fp_chance = 0.01
>
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
>
> AND comment = ''
>
> AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
>
> AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
>
> AND crc_check_chance = 1.0
>
> AND dclocal_read_repair_chance = 0.1
>
> AND default_time_to_live = 240
>
> AND gc_grace_seconds = 864000
>
> AND max_index_interval = 2048
>
> AND memtable_flush_period_in_ms = 0
>
> AND min_index_interval = 128
>
> AND read_repair_chance = 0.0
>
> AND speculative_retry = '99PERCENTILE';
>
> insert into test (name, description) values ('name5', 'name description5') using ttl 360;
>
> select * from test ;
>
>
>  name  | description
>
> ---+---
>
>  name5 | name description5
>
>
> SELECT TTL (description) from test;
>
>
>  ttl(description)
>
> --
>
>  351
>
> Can someone please clear this for me?
>
>
>
>
>
>

>>>
>>
>


Re: default_time_to_live vs TTL on insert statement

2018-07-12 Thread Nitan Kainth
Kurt,

It is the same in the Apache documentation too; I am not able to find it
right now.

But my question is:
How to set TTL for a whole column?

On Wed, Jul 11, 2018 at 11:36 PM, kurt greaves  wrote:

> The Datastax documentation is wrong. It won't error, and it shouldn't. If
> you want to fix that documentation I suggest contacting Datastax.
>
> On 11 July 2018 at 19:56, Nitan Kainth  wrote:
>
>> Hi DuyHai,
>>
>> Could you please explain in what case C* will error based on the documented
>> statement:
>>
>> You can set a default TTL for an entire table by setting the table's
>> default_time_to_live
>> 
>>  property. If you try to set a TTL for a specific column that is longer
>> than the time defined by the table TTL, Cassandra returns an error.
>>
>>
>>
>> On Wed, Jul 11, 2018 at 2:34 PM, DuyHai Doan 
>> wrote:
>>
>>> default_time_to_live
>>> 
>>>  property applies if you don't specify any TTL on your CQL statement
>>>
>>> However you can always override the default_time_to_live
>>> 
>>>  property by specifying a custom value for each CQL statement
>>>
>>> The behavior is correct, nothing wrong here
>>>
>>> On Wed, Jul 11, 2018 at 7:31 PM, Nitan Kainth 
>>> wrote:
>>>
 Hi,

 As per document: https://docs.datastax.com/en/cql/3.3/cql/cql_using
 /useExpireExample.html


-

You can set a default TTL for an entire table by setting the table's
 default_time_to_live

 
 property. If you try to set a TTL for a specific column that is
longer than the time defined by the table TTL, Cassandra returns an 
 error.


 When I tried to test this statement, I found that we can insert data with
 a TTL greater than default_time_to_live. Does the document need correction,
 or am I misunderstanding it?

 CREATE TABLE test (

 name text PRIMARY KEY,

 description text

 ) WITH bloom_filter_fp_chance = 0.01

 AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}

 AND comment = ''

 AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}

 AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}

 AND crc_check_chance = 1.0

 AND dclocal_read_repair_chance = 0.1

 AND default_time_to_live = 240

 AND gc_grace_seconds = 864000

 AND max_index_interval = 2048

 AND memtable_flush_period_in_ms = 0

 AND min_index_interval = 128

 AND read_repair_chance = 0.0

 AND speculative_retry = '99PERCENTILE';

 insert into test (name, description) values ('name5', 'name description5') using ttl 360;

 select * from test ;


  name  | description

 ---+---

  name5 | name description5


 SELECT TTL (description) from test;


  ttl(description)

 --

  351

 Can someone please clear this for me?






>>>
>>
>


Re: Compaction out of memory

2018-07-12 Thread Hannu Kröger
Could the problem be that the process ran out of file handles? Recommendation 
is to tune that higher than the default. 

Hannu

> onmstester onmstester  wrote on 12.7.2018 at 12.44:
> 
> Cassandra crashed in two out of 10 nodes in my cluster within 1 day, the 
> error is:
> 
> ERROR [CompactionExecutor:3389] 2018-07-10 11:27:58,857 
> CassandraDaemon.java:228 - Exception in thread 
> Thread[CompactionExecutor:3389,1,main]
> org.apache.cassandra.io.FSReadError: java.io.IOException: Map failed
> at 
> org.apache.cassandra.io.util.ChannelProxy.map(ChannelProxy.java:157) 
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.io.util.MmappedRegions$State.add(MmappedRegions.java:310)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.io.util.MmappedRegions$State.access$400(MmappedRegions.java:246)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.io.util.MmappedRegions.updateState(MmappedRegions.java:170)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.io.util.MmappedRegions.<init>(MmappedRegions.java:73) 
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.io.util.MmappedRegions.<init>(MmappedRegions.java:61) 
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.io.util.MmappedRegions.map(MmappedRegions.java:104) 
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.io.util.FileHandle$Builder.complete(FileHandle.java:362) 
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:290)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:179)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:134)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:65)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:142)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:201)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:85)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:275)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_65]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_65]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_65]
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>  [apache-cassandra-3.11.2.jar:3.11.2]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_65]
> Caused by: java.io.IOException: Map failed
> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:939) 
> ~[na:1.8.0_65]
> at 
> org.apache.cassandra.io.util.ChannelProxy.map(ChannelProxy.java:153) 
> ~[apache-cassandra-3.11.2.jar:3.11.2]
> ... 23 common frames omitted
> Caused by: java.lang.OutOfMemoryError: Map failed
> at sun.nio.ch.FileChannelImpl.map0(Native Method) ~[na:1.8.0_65]
> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:936) 
> ~[na:1.8.0_65]
> ... 24 common frames omitted
> 
> Each node has 128 GB of RAM, of which 32 GB is allocated as Cassandra heap.
> 
> 
> 


Compaction out of memory

2018-07-12 Thread onmstester onmstester
Cassandra crashed in two out of 10 nodes in my cluster within 1 day; the
error is:

ERROR [CompactionExecutor:3389] 2018-07-10 11:27:58,857 CassandraDaemon.java:228 - Exception in thread Thread[CompactionExecutor:3389,1,main]
org.apache.cassandra.io.FSReadError: java.io.IOException: Map failed
    at org.apache.cassandra.io.util.ChannelProxy.map(ChannelProxy.java:157) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.io.util.MmappedRegions$State.add(MmappedRegions.java:310) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.io.util.MmappedRegions$State.access$400(MmappedRegions.java:246) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.io.util.MmappedRegions.updateState(MmappedRegions.java:170) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.io.util.MmappedRegions.<init>(MmappedRegions.java:73) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.io.util.MmappedRegions.<init>(MmappedRegions.java:61) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.io.util.MmappedRegions.map(MmappedRegions.java:104) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.io.util.FileHandle$Builder.complete(FileHandle.java:362) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:290) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:179) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:134) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:65) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:142) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:201) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:85) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:275) ~[apache-cassandra-3.11.2.jar:3.11.2]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_65]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_65]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_65]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_65]
    at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.2.jar:3.11.2]
    at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_65]
Caused by: java.io.IOException: Map failed
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:939) ~[na:1.8.0_65]
    at org.apache.cassandra.io.util.ChannelProxy.map(ChannelProxy.java:153) ~[apache-cassandra-3.11.2.jar:3.11.2]
    ... 23 common frames omitted
Caused by: java.lang.OutOfMemoryError: Map failed
    at sun.nio.ch.FileChannelImpl.map0(Native Method) ~[na:1.8.0_65]
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:936) ~[na:1.8.0_65]
    ... 24 common frames omitted

Each node has 128 GB of RAM, of which 32 GB is allocated as Cassandra heap.