Wrong copy/paste!

Looking at the code, it should do nothing:

 // look up the sstables now that we're on the compaction executor, so we don't
 // try to re-compact something that was already being compacted earlier.

On 4 September 2017 at 13:54, Nicolas Guyomar <nicolas.guyo...@gmail.com>
wrote:

> You'll get the WARN "Will not compact {}: it is not an active sstable"  :)
>
> On 4 September 2017 at 12:07, Shalom Sagges <shal...@liveperson.com>
> wrote:
>
>> By the way, does anyone know what happens if I run a user-defined
>> compaction on an sstable that's already being compacted?
>>
>>
>>
>>
>>
>>
>> On Sun, Sep 3, 2017 at 2:55 PM, Shalom Sagges <shal...@liveperson.com>
>> wrote:
>>
>>> Try this blog by The Last Pickle:
>>>
>>> http://thelastpickle.com/blog/2016/10/18/user-defined-compaction.html
>>>
>>>
>>>
>>>
>>>
>>>
>>> Shalom Sagges
>>> DBA
>>>
>>>
>>>
>>> On Sat, Sep 2, 2017 at 8:34 PM, Jeff Jirsa <jji...@gmail.com> wrote:
>>>
>>>> If you're on 3.0 (3.0.6 or 3.0.8 or newer, I don't remember which), consider
>>>> TWCS - it was designed for TTL-only time series use cases.
>>>>
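>>>> Switching is just a change to the table's compaction options - a minimal
>>>> sketch (the keyspace/table names and the one-day window are placeholders;
>>>> pick a window size that gives a sensible number of windows over your TTL):
>>>>
>>>>     ALTER TABLE my_keyspace.my_table
>>>>     WITH compaction = {'class': 'TimeWindowCompactionStrategy',
>>>>                        'compaction_window_unit': 'DAYS',
>>>>                        'compaction_window_size': '1'};
>>>>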
>>>> Alternatively, if you have IO to spare, you may find LCS works as well
>>>> (it'll cause quite a bit more compaction, but gives a much higher chance of
>>>> compacting away tombstones).
>>>>
>>>> There are also tombstone-focused sub-properties to more aggressively compact
>>>> sstables that have a lot of tombstones - check the docs for "unchecked
>>>> tombstone compaction" and "tombstone threshold". Enabling those allows more
>>>> aggressive automatic single-sstable compactions.
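>>>>
>>>> For example, staying on STCS, something along these lines (a sketch only;
>>>> the table name is a placeholder and the threshold value is just an
>>>> illustration, tune it to your workload):
>>>>
>>>>     ALTER TABLE my_keyspace.my_table
>>>>     WITH compaction = {'class': 'SizeTieredCompactionStrategy',
>>>>                        'unchecked_tombstone_compaction': 'true',
>>>>                        'tombstone_threshold': '0.2'};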
>>>>
>>>> --
>>>> Jeff Jirsa
>>>>
>>>>
>>>> On Sep 2, 2017, at 7:10 AM, qf zhou <zhouqf2...@gmail.com> wrote:
>>>>
>>>>
>>>> Yes, you are right. I am using the STCS compaction strategy with some kind
>>>> of time series model. Too much disk space is being occupied.
>>>>
>>>> What should I do to keep the disk from filling up?
>>>>
>>>> I only want to keep the most recent 100 days of data, so I set
>>>> default_time_to_live = 8640000 (100 days).
>>>>
>>>> I know I need to do something to stop the disk space growth, but I really
>>>> don't know how to do it.
>>>>
>>>>
>>>> Here is the relevant configuration of the big data table:
>>>>
>>>>     AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy',
>>>>         'max_threshold': '32', 'min_threshold': '12',
>>>>         'tombstone_threshold': '0.1', 'unchecked_tombstone_compaction': 'true'}
>>>>     AND compression = {'chunk_length_in_kb': '64',
>>>>         'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
>>>>     AND crc_check_chance = 1.0
>>>>     AND dclocal_read_repair_chance = 0.1
>>>>     AND default_time_to_live = 8640000
>>>>     AND gc_grace_seconds = 432000
>>>>
>>>>
>>>>
>>>> On 2 September 2017 at 7:34 PM, Nicolas Guyomar <nicolas.guyo...@gmail.com> wrote:
>>>>
>>>> you are using the STCS compaction strategy with some kind of time series
>>>> model, and you are going to end up with your disk full!
>>>>
>>>>
>>>>
>>>
>>
>>
>
>
