[ https://issues.apache.org/jira/browse/CASSANDRA-7810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Halliday reopened CASSANDRA-7810:
------------------------------------------


Sorry guys, still seeing this intermittently even after an env cleanup. I
suspect it's linked to the merging of sstables. If I flush the data and the
tombstones together to a single sstable and then compact, it's fine. If I flush
the data and the tombstones separately, so that I end up with two sstables, and
then compact them, it goes wrong. Any chance someone could try that modified
process (roughly the sequence sketched below) and see if it's reproducible? thx.
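
For clarity, the modified sequence I mean is roughly the following (assuming the
same test keyspace, foo table and gc_grace_seconds = 0 as in the report below):

insert into foo (a,b) values (1,2);
bin/nodetool flush
-- first sstable now holds just the row
delete from foo where a=1 and b=2;
bin/nodetool flush
-- second sstable now holds just the tombstone
bin/nodetool compact
select * from foo;
-- expect 0 rows; instead the deleted row comes back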

> tombstones gc'd before being locally applied
> --------------------------------------------
>
>                 Key: CASSANDRA-7810
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7810
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: 2.1.0.rc6
>            Reporter: Jonathan Halliday
>            Assignee: Marcus Eriksson
>             Fix For: 2.1.0
>
>         Attachments: range_tombstone_test.py
>
>
> # single node environment
> CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1 };
> use test;
> create table foo (a int, b int, primary key(a,b));
> alter table foo with gc_grace_seconds = 0;
> insert into foo (a,b) values (1,2);
> select * from foo;
> -- one row returned. so far, so good.
> delete from foo where a=1 and b=2;
> select * from foo;
> -- 0 rows. still rainbows and kittens.
> bin/nodetool flush;
> bin/nodetool compact;
> select * from foo;
>  a | b
> ---+---
>  1 | 2
> (1 rows)
> gahhh.
> Looks like the tombstones were considered obsolete and thrown away before 
> being applied to the compaction? gc_grace just means the interval after 
> which they won't be available for repair to remote nodes - they should still 
> apply locally regardless (and they do, correctly, in 2.0.9).



--
This message was sent by Atlassian JIRA
(v6.2#6252)
