On Thu, Jun 20, 2019 at 7:37 AM Alexander Dejanovski <a...@thelastpickle.com>
wrote:

> Hi Leo,
>
> The overlapping SSTables are indeed the most probable cause as suggested
> by Jeff.
> Do you know if the tombstone compactions actually triggered? (Did the
> SSTable names change?)
>

Hello!

I believe they have changed. I do not remember the exact sstable names, but
the "last modified" timestamps of these SSTables have changed recently.


> Could you run the following command to list the SSTables and provide us
> the output? It will display their timestamp ranges along with the
> estimated droppable tombstone ratio.
>
>
> for f in *Data.db; do meta=$(sstablemetadata -gc_grace_seconds 259200 $f);
> echo $(date --date=@$(echo "$meta" | grep Maximum\ time | cut -d" "  -f3|
> cut -c 1-10) '+%m/%d/%Y %H:%M:%S') $(date --date=@$(echo "$meta" | grep
> Minimum\ time | cut -d" "  -f3| cut -c 1-10) '+%m/%d/%Y %H:%M:%S') $(echo
> "$meta" | grep droppable) $(ls -lh $f); done | sort
>
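
Here is the same loop split over multiple lines for readability (equivalent
to the quoted one-liner; note that sorting `%m/%d/%Y` dates groups by month
rather than chronologically):

```
for f in *Data.db; do
  meta=$(sstablemetadata --gc_grace_seconds 259200 "$f")
  # Cassandra timestamps are microseconds since epoch; the first 10 digits
  # are the epoch seconds that `date` expects.
  max_ts=$(echo "$meta" | grep 'Maximum time' | cut -d' ' -f3 | cut -c 1-10)
  min_ts=$(echo "$meta" | grep 'Minimum time' | cut -d' ' -f3 | cut -c 1-10)
  echo "$(date --date=@"$max_ts" '+%m/%d/%Y %H:%M:%S')" \
       "$(date --date=@"$min_ts" '+%m/%d/%Y %H:%M:%S')" \
       "$(echo "$meta" | grep droppable)" \
       "$(ls -lh "$f")"
done | sort
```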

Here are the results:

```
04/01/2019 22:53:12 03/06/2018 16:46:13 Estimated droppable tombstones: 0.0 -rw-r--r-- 1 cassandra cassandra 16G Apr 13 14:35 md-147916-big-Data.db
04/11/2262 23:47:16 10/09/1940 19:13:17 Estimated droppable tombstones: 0.0 -rw-r--r-- 1 cassandra cassandra 218M Jun 20 05:57 md-167948-big-Data.db
04/11/2262 23:47:16 10/09/1940 19:13:17 Estimated droppable tombstones: 0.0 -rw-r--r-- 1 cassandra cassandra 2.2G Jun 20 05:57 md-167942-big-Data.db
05/01/2019 08:03:24 03/06/2018 16:46:13 Estimated droppable tombstones: 0.0 -rw-r--r-- 1 cassandra cassandra 4.6G May 1 08:39 md-152253-big-Data.db
05/09/2018 06:35:03 03/06/2018 16:46:07 Estimated droppable tombstones: 0.0 -rw-r--r-- 1 cassandra cassandra 30G Apr 13 22:09 md-147948-big-Data.db
05/21/2019 05:28:01 03/06/2018 16:46:16 Estimated droppable tombstones: 0.45150604672159905 -rw-r--r-- 1 cassandra cassandra 1.1G Jun 20 05:55 md-167943-big-Data.db
05/22/2019 11:54:33 03/06/2018 16:46:16 Estimated droppable tombstones: 0.30826566640798975 -rw-r--r-- 1 cassandra cassandra 7.6G Jun 20 04:35 md-167913-big-Data.db
06/13/2019 00:02:40 03/06/2018 16:46:08 Estimated droppable tombstones: 0.20980847354256815 -rw-r--r-- 1 cassandra cassandra 6.9G Jun 20 04:51 md-167917-big-Data.db
06/17/2019 05:56:12 06/16/2019 20:33:52 Estimated droppable tombstones: 0.6114260192855792 -rw-r--r-- 1 cassandra cassandra 257M Jun 20 05:29 md-167938-big-Data.db
06/18/2019 11:21:55 03/06/2018 17:48:22 Estimated droppable tombstones: 0.18655813086540254 -rw-r--r-- 1 cassandra cassandra 2.2G Jun 20 05:52 md-167940-big-Data.db
06/19/2019 16:53:04 06/18/2019 11:22:04 Estimated droppable tombstones: 0.0 -rw-r--r-- 1 cassandra cassandra 425M Jun 19 17:08 md-167782-big-Data.db
06/20/2019 04:17:22 06/19/2019 16:53:04 Estimated droppable tombstones: 0.0 -rw-r--r-- 1 cassandra cassandra 146M Jun 20 04:18 md-167921-big-Data.db
06/20/2019 05:50:23 06/20/2019 04:17:32 Estimated droppable tombstones: 0.0 -rw-r--r-- 1 cassandra cassandra 42M Jun 20 05:56 md-167946-big-Data.db
06/20/2019 05:56:03 06/20/2019 05:50:32 Estimated droppable tombstones: 0.0 -rw-r--r-- 2 cassandra cassandra 4.8M Jun 20 05:56 md-167947-big-Data.db
07/03/2018 17:26:54 03/06/2018 16:46:07 Estimated droppable tombstones: 0.0 -rw-r--r-- 1 cassandra cassandra 27G Apr 13 17:45 md-147919-big-Data.db
09/09/2018 18:55:23 03/06/2018 16:46:08 Estimated droppable tombstones: 0.0 -rw-r--r-- 1 cassandra cassandra 30G Apr 13 18:57 md-147926-big-Data.db
11/30/2018 11:52:33 03/06/2018 16:46:08 Estimated droppable tombstones: 0.0 -rw-r--r-- 1 cassandra cassandra 14G Apr 13 13:53 md-147908-big-Data.db
12/20/2018 07:30:03 03/06/2018 16:46:08 Estimated droppable tombstones: 0.0 -rw-r--r-- 1 cassandra cassandra 9.3G Apr 13 13:28 md-147906-big-Data.db
```

> You could also check the min and max tokens in each SSTable (not sure if
> you get that info from 3.0 sstablemetadata) so that you can detect the
> SSTables that overlap on token ranges with the ones that carry the
> tombstones, and have earlier timestamps. This way you'll be able to trigger
> manual compactions, targeting those specific SSTables.
>

I have checked, and I don't believe that information is available in the
3.0.x version of sstablemetadata :(
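
That said, a hand-picked set of SSTables can still be compacted together
through the CompactionManager MBean, without needing the token ranges. A
sketch using jmxterm (the jar name, port, and file paths are assumptions to
adapt to your setup; `nodetool compact --user-defined` does the same thing
but only exists from Cassandra 3.4 onwards, so JMX is the route on 3.0.x):

```
# Trigger a user-defined compaction over two specific SSTables via JMX
echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction /var/lib/cassandra/data/stats/tablename/md-167913-big-Data.db,/var/lib/cassandra/data/stats/tablename/md-167917-big-Data.db" | java -jar jmxterm.jar -l localhost:7199 -n
```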


> The rule for a tombstone to be purged is that there is no SSTable outside
> the compaction that would possibly contain the partition and that would
> have older timestamps.
>
Is there a way to log these checks and the decisions made by the compaction
thread?
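
One way to see at least part of this reasoning: raise compaction logging to
DEBUG at runtime. A sketch, assuming the default logback configuration
(DEBUG output then lands in debug.log):

```
# Raise compaction logging verbosity on this node (set back to INFO to revert)
nodetool setlogginglevel org.apache.cassandra.db.compaction DEBUG
```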


> Is this a follow-up to your previous issue, where you were trying to
> perform a major compaction on an LCS table?
>

In a way, yes.

We are trying to reclaim, globally, the disk space used up by our tombstones
(on more than one table). We have recently started to purge old data from
our Cassandra cluster, and since disk space isn't cheap on cloud providers,
we are trying to make sure the data correctly expires and the disk space is
reclaimed!

The major compaction on the LCS table was one of our unsuccessful attempts
(it took too long and used too much disk space, so we abandoned it), and we
are currently trying to tweak the compaction parameters to speed things up.
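
For reference, this is the kind of change we are experimenting with; a
sketch, with a placeholder table name and subproperty values that are just
our current guesses (unlike a per-node JMX change, an ALTER TABLE applies to
the whole cluster and survives restarts):

```
cqlsh -e "ALTER TABLE stats.tablename WITH compaction = {
  'class': 'SizeTieredCompactionStrategy',
  'unchecked_tombstone_compaction': 'true',
  'tombstone_threshold': '0.1',
  'tombstone_compaction_interval': '86400'};"
```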

Regards.

Leo

On Thu, Jun 20, 2019 at 7:02 AM Jeff Jirsa <jji...@gmail.com> wrote:
>
>> Probably overlapping sstables
>>
>> Which compaction strategy?
>>
>>
>> > On Jun 19, 2019, at 9:51 PM, Léo FERLIN SUTTON
>> <lfer...@mailjet.com.invalid> wrote:
>> >
>> > I have used the following command to check if I had droppable
>> tombstones:
>> > `/usr/bin/sstablemetadata --gc_grace_seconds 259200
>> /var/lib/cassandra/data/stats/tablename/md-sstablename-big-Data.db`
>> >
>> > I checked every sstable in a loop and found 4 sstables with droppable
>> tombstones:
>> >
>> > ```
>> > Estimated droppable tombstones: 0.1558453651124074
>> > Estimated droppable tombstones: 0.20980847354256815
>> > Estimated droppable tombstones: 0.30826566640798975
>> > Estimated droppable tombstones: 0.45150604672159905
>> > ```
>> >
>> > I changed my compaction configuration this morning (via JMX) to force a
>> tombstone compaction. These are my settings on this node:
>> >
>> > ```
>> > {
>> >   "max_threshold": "32",
>> >   "min_threshold": "4",
>> >   "unchecked_tombstone_compaction": "true",
>> >   "tombstone_threshold": "0.1",
>> >   "class": "org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy"
>> > }
>> > ```
>> > The threshold is lower than the droppable tombstone ratio of these
>> sstables, and I expected the setting `unchecked_tombstone_compaction=true`
>> to force Cassandra to run a "tombstone compaction", yet about 24h later
>> all the tombstones are still there.
>> >
>> > ## About the cluster:
>> >
>> > The compaction backlog is clear, and here are our Cassandra settings:
>> >
>> > Cassandra 3.0.18
>> > concurrent_compactors: 4
>> > compaction_throughput_mb_per_sec: 150
>> > sstable_preemptive_open_interval_in_mb: 50
>> > memtable_flush_writers: 4
>> >
>> >
>> > Any idea what I might be missing?
>> >
>> > Regards,
>> >
>> > Leo
>>
