Hi Anuj

Yes, thanks. Looking at my log file I see:

ERROR [SharedPool-Worker-2] 2015-03-24 13:52:06,751 SliceQueryFilter.java:218 - Scanned over 100000 tombstones in test1.msg; query aborted (see tombstone_failure_threshold)
WARN  [SharedPool-Worker-2] 2015-03-24 13:52:06,759 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread Thread[SharedPool-Worker-2,5,main]: {}
java.lang.RuntimeException: org.apache.cassandra.db.filter.TombstoneOverwhelmingException

I'm reading up on how to deal with this now, thanks.
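
One approach I keep seeing suggested is to let the tombstones get purged
sooner by lowering gc_grace_seconds on the table and then forcing a
compaction, along these lines (the table name is from my test schema, and
as I understand it shrinking gc_grace_seconds is only safe on a single-node
or test setup, since it also protects deleted data from being resurrected
by repair):

  ALTER TABLE test1.msg WITH gc_grace_seconds = 3600;

and then, from the shell:

  nodetool compact test1 msg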

On 24 March 2015 at 13:16, Anuj Wadehra <anujw_2...@yahoo.co.in> wrote:

> Hi Joss
>
> We faced a similar issue recently. The problem seems to be related to the
> huge number of tombstones generated by the deletions. I would suggest
> increasing the tombstone warning and failure thresholds in cassandra.yaml.
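>
> For reference, the relevant keys in cassandra.yaml look like this (the
> values shown are the 2.0/2.1 defaults, if I remember correctly; raising
> them is only a stopgap while you sort out the deletion pattern):
>
>   tombstone_warn_threshold: 1000
>   tombstone_failure_threshold: 100000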
>
> Once you do that and run your program, make sure you monitor Cassandra's
> heap usage with the nodetool info command. If the heap is nearly full,
> stalls are to be expected, so you may need to increase the heap size.
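>
> A quick way to watch the heap (the exact output format of nodetool info
> varies a little between versions):
>
>   nodetool info | grep "Heap Memory"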
>
> Because of the accumulated tombstones, your query cannot complete within
> the default time limit, so I would also suggest increasing the read
> timeout in cassandra.yaml to give the query a chance to complete.
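>
> The timeouts also live in cassandra.yaml. If I remember right the
> defaults are 5000 ms and 10000 ms, and since a select without a key is a
> range scan, the range timeout is probably the one that applies to your
> query; e.g. to double both:
>
>   read_request_timeout_in_ms: 10000
>   range_request_timeout_in_ms: 20000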
>
> Please look at your logs to make sure there are no exceptions.
>
> Thanks
> Anuj Wadehra
>
>
> ------------------------------
> *From:* "joss Earl" <j...@rareformnewmedia.com>
> *Date:* Tue, 24 Mar 2015 at 6:17 pm
> *Subject:* Re: error deleting messages
>
> It inserts 100,000 messages; I then start deleting them by grabbing
> chunks of 100 at a time and individually deleting each message.
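>
> The core of it looks roughly like this (a trimmed-down sketch, not the
> exact script from the gist; it assumes a msg table keyed by id, which is
> more or less what my test schema uses):
>
>   from cassandra.cluster import Cluster
>
>   cluster = Cluster(['127.0.0.1'])
>   session = cluster.connect('test1')
>
>   # Grab ids in chunks of 100, then delete each message individually.
>   while True:
>       rows = list(session.execute('SELECT id FROM msg LIMIT 100'))
>       if not rows:
>           break
>       for row in rows:
>           session.execute('DELETE FROM msg WHERE id = %s', (row.id,))
>   # Each pass re-scans the tombstones left by the earlier deletes, which
>   # is where it eventually grinds to a halt.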
>
> So, the 100,000 messages get inserted without any trouble; I run into
> trouble once I have deleted about half of them. I've run this on machines
> with 4, 8, and 16 GB of RAM and the behaviour was consistent (it fails
> after 50,000 or so messages on that table, or maybe 30,000 messages on a
> table with more columns).
>
>
>
> On 24 March 2015 at 12:35, Ali Akhtar <ali.rac...@gmail.com> wrote:
>
>> 50100 inserts or deletes? Also, how much RAM / CPU do you have on the
>> server running this, and what's the RAM / CPU usage at about the time it
>> fails?
>>
>> On Tue, Mar 24, 2015 at 5:29 PM, joss Earl <j...@rareformnewmedia.com>
>> wrote:
>>
>>> On a stock install, it gets to about 50100 before grinding to a halt.
>>>
>>>
>>>
>>> On 24 March 2015 at 12:19, Ali Akhtar <ali.rac...@gmail.com> wrote:
>>>
>>>> What happens when you run it? How far does it get before stopping?
>>>>
>>>> On Tue, Mar 24, 2015 at 5:13 PM, joss Earl <j...@rareformnewmedia.com>
>>>> wrote:
>>>>
>>>>> Sure: https://gist.github.com/joss75321/7d85e4c75c06530e9d80
>>>>>
>>>>> On 24 March 2015 at 12:04, Ali Akhtar <ali.rac...@gmail.com> wrote:
>>>>>
>>>>>> Can you put your code on gist.github.com or pastebin?
>>>>>>
>>>>>> On Tue, Mar 24, 2015 at 4:58 PM, joss Earl <j...@rareformnewmedia.com
>>>>>> > wrote:
>>>>>>
>>>>>>> I run into trouble after a while if I delete rows; this happens in
>>>>>>> both 2.1.3 and 2.0.13, and I encountered the same problem with both
>>>>>>> the DataStax Java driver and the stock Python driver.
>>>>>>> The problem is reproducible using the attached Python program.
>>>>>>>
>>>>>>> Once the problem is encountered, the table becomes unusable:
>>>>>>>
>>>>>>> cqlsh:test1> select id from msg limit 1;
>>>>>>> Request did not complete within rpc_timeout.
>>>>>>>
>>>>>>> So, my questions are:
>>>>>>> Am I doing something wrong?
>>>>>>> Is this expected behaviour?
>>>>>>> Is there some way to fix the table and make it usable again once
>>>>>>> this has happened?
>>>>>>> If this is a bug, what is the best way of reporting it?
>>>>>>>
>>>>>>> Many thanks
>>>>>>> Joss
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
