Hi Leo,

The overlapping SSTables are indeed the most probable cause, as Jeff suggested.
Do you know if the tombstone compactions actually triggered? (Did the SSTable
names change?)
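
One quick way to check, assuming the default log location and with
"tablename" as a placeholder for your table, is to look at the compaction
history and grep the logs:

nodetool compactionhistory | grep tablename
grep -i 'compacted' /var/log/cassandra/system.log | grep tablename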

Could you run the following command to list the SSTables and send us the
output? It displays each SSTable's timestamp range along with its estimated
droppable tombstone ratio:


for f in *Data.db; do
  meta=$(sstablemetadata -gc_grace_seconds 259200 "$f")
  echo "$(date --date=@$(echo "$meta" | grep 'Maximum time' | cut -d' ' -f3 | cut -c 1-10) '+%m/%d/%Y %H:%M:%S')" \
       "$(date --date=@$(echo "$meta" | grep 'Minimum time' | cut -d' ' -f3 | cut -c 1-10) '+%m/%d/%Y %H:%M:%S')" \
       "$(echo "$meta" | grep droppable)" \
       "$(ls -lh "$f")"
done | sort


That will let us see the timestamp ranges of the SSTables. You could also
check the min and max tokens in each SSTable (I'm not sure the 3.0
sstablemetadata prints that info) to detect the SSTables that overlap, on
token ranges, with the ones carrying the tombstones while having earlier
timestamps. You'll then be able to trigger manual compactions targeting
those specific SSTables, as sketched below.
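
To compact specific SSTables, you can call the forceUserDefinedCompaction
operation on the CompactionManager MBean. Here's a rough sketch using
jmxterm (the jar version, JMX port and SSTable names below are placeholders,
adjust them to your setup):

echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction md-1234-big-Data.db,md-5678-big-Data.db" | \
  java -jar jmxterm-1.0.2-uber.jar -l localhost:7199 -v silent -n
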
Keep in mind the rule for purging: a tombstone can only be dropped if no
SSTable outside the compaction could contain its partition with older
timestamps. For example, a tombstone written at t2 cannot be purged while
another SSTable, left out of the compaction, may still hold data for the
same partition written at t1 < t2.

Is this a follow-up to your previous issue, where you were trying to perform
a major compaction on an LCS table?


-----------------
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


On Thu, Jun 20, 2019 at 7:02 AM Jeff Jirsa <jji...@gmail.com> wrote:

> Probably overlapping sstables
>
> Which compaction strategy?
>
>
> > On Jun 19, 2019, at 9:51 PM, Léo FERLIN SUTTON <lfer...@mailjet.com.invalid> wrote:
> >
> > I have used the following command to check if I had droppable tombstones:
> > `/usr/bin/sstablemetadata --gc_grace_seconds 259200 /var/lib/cassandra/data/stats/tablename/md-sstablename-big-Data.db`
> >
> > I checked every sstable in a loop and had 4 sstables with droppable tombstones:
> >
> > ```
> > Estimated droppable tombstones: 0.1558453651124074
> > Estimated droppable tombstones: 0.20980847354256815
> > Estimated droppable tombstones: 0.30826566640798975
> > Estimated droppable tombstones: 0.45150604672159905
> > ```
> >
> > I changed my compaction configuration this morning (via JMX) to force a tombstone compaction. These are my settings on this node:
> >
> > ```
> > {
> > "max_threshold":"32",
> > "min_threshold":"4",
> > "unchecked_tombstone_compaction":"true",
> > "tombstone_threshold":"0.1",
> > "class":"org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy"
> > }
> > ```
> > The threshold is lower than the droppable tombstone ratio of these sstables, and I expected the setting `unchecked_tombstone_compaction=true` to force Cassandra to run a "tombstone compaction", yet about 24h later all the tombstones are still there.
> >
> > ## About the cluster:
> >
> > The compaction backlog is clear, and here are our Cassandra settings:
> >
> > Cassandra 3.0.18
> > concurrent_compactors: 4
> > compaction_throughput_mb_per_sec: 150
> > sstable_preemptive_open_interval_in_mb: 50
> > memtable_flush_writers: 4
> >
> >
> > Any idea what I might be missing?
> >
> > Regards,
> >
> > Leo
>