Hello Aneesh,
Reading your message and the answers given, I really think this post I wrote
about 3 years ago now (how quickly time goes by...) about tombstones
might be of interest to you:
https://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html.
Your problem is not related
On Tue, Jun 18, 2019 at 8:06 AM ANEESH KUMAR K.M wrote:
>
> I am using Cassandra cluster with 3 nodes which is hosted on AWS. Also we
> have NodeJS web Application which is on AWS ELB. Now the issue is that,
> when I add 2 or more servers (nodeJS) in AWS ELB then the delete queries
> are not working on Cassandra.
This is nearly impossible to answer without much more info, but I suspect
you either:
- are using very weak consistency levels, or have some weirdness with data
centers / availability zones (like SimpleStrategy and LOCAL_* consistency
levels), or
- have bad clocks / no NTP / wrong time zones.
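To illustrate the consistency-level point (keyspace, table, and key are hypothetical, not from the thread): with RF=3, deleting at ONE and reading at ONE from another replica can miss the tombstone, while QUORUM on both paths cannot. In cqlsh this looks like:

```cql
-- Force quorum for both the delete and the subsequent read (cqlsh).
CONSISTENCY QUORUM;

-- On an RF=3 keyspace, QUORUM writes + QUORUM reads overlap (W + R > RF),
-- so the read is guaranteed to see the delete's tombstone.
DELETE FROM myks.sessions WHERE session_id = 42;
SELECT * FROM myks.sessions WHERE session_id = 42;
```

With ONE/ONE on multiple app servers behind a load balancer, each request may hit a different replica, which matches the "delete seems not to work" symptom.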
> On Jun 17, 2019, at 11:05
Hi,
I am using Cassandra cluster with 3 nodes which is hosted on AWS. Also we
have NodeJS web Application which is on AWS ELB. Now the issue is that,
when I add 2 or more servers (nodeJS) in AWS ELB then the delete queries
are not working on Cassandra.
It's working when there is only one server.
Regards
Alok
> On 9 Apr 2019, at 15:56, Jon Haddad wrote:
>
Normal deletes are fine.
Sadly there's a lot of hand wringing about tombstones in the generic
sense which leads people to try to work around *every* case where
they're used. This is unnecessary. A tombstone over a single row
isn't a problem, especially if you're only fetching that one row back
Would the query "SELECT * FROM myTable WHERE course_id = 'C' AND
assignment_id = 'A2';" be affected too?
For the query "SELECT * FROM myTable WHERE course_id = 'C';", to work
around the tombstone problem, we are thinking about not doing hard deletes
and instead doing soft deletes.
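A soft-delete sketch along those lines (the table layout and the deleted flag are assumptions for illustration): the UPDATE is an ordinary write, so no tombstone is created, but readers must now filter flagged rows in application code:

```cql
-- Hypothetical schema matching the queries quoted above.
CREATE TABLE myks.mytable (
    course_id     text,
    assignment_id text,
    deleted       boolean,
    PRIMARY KEY (course_id, assignment_id)
);

-- Soft delete: a plain write, no tombstone:
UPDATE myks.mytable SET deleted = true
WHERE course_id = 'C' AND assignment_id = 'A2';

-- Readers fetch the partition and skip rows with deleted = true:
SELECT * FROM myks.mytable WHERE course_id = 'C';
```

The trade-off is that "deleted" rows keep occupying disk until you eventually remove them for real.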
There are several posts on our blog <http://thelastpickle.com/blog/> that
cover the tombstones and compaction strategies topic (search for
"tombstone" on that page), notably this one:
http://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html
Cheers,
On Sat, Mar 16, 2019 at 1:04 AM Nick Hatfield
wrote:
Hey guys,
Can someone give me some ideas or link some good material for determining a
good / aggressive tombstone strategy? I want to make sure my tombstones are
getting purged as soon as possible to reclaim disk space.
Thanks
>> Hello all
>>
>> I have tried to sum up all rules related to tombstone removal:
>>
>> ----
>> --
>>
>> Given a tombstone written at timestamp (t) for a partition key
Yes it does. Consider if it didn't and you kept writing to the same
partition, you'd never be able to remove any tombstones for that partition.
On Tue., 6 Nov. 2018, 19:40 DuyHai Doan wrote:
Hello all
I have tried to sum up all rules related to tombstone removal:
--
Given a tombstone written at timestamp (t) for partition key (P) in
SSTable (S1), this tombstone will be removed:
1) after
On Aug 24, 2018, 1:46 AM -0400, Charulata Sharma (charshar) wrote:
Hi All,
I have shared my experience of tombstone clearing in this blog post.
Sharing it in this forum for wider distribution.
https://medium.com/cassandra-tombstones-clearing-use-case/the-curios-case-of-tombstones-d897f681a378
Thanks,
Charu
hing else lined up properly to solve a queue problem.
>
>
> Sean Durity
>
> From: Abhishek Singh
> Sent: Tuesday, June 19, 2018 10:41 AM
> To: user@cassandra.apache.org
> Subject: [EXTERNAL] Re: Tombstone
The partition key is made of a datetime (basically the date truncated to
the hour) and a bucket. I think your RCA may be correct, since we are
deleting the partition rows one by one, not in a batch; files may be
overlapping
> >
> > Hi all,
> > We are using Cassandra for storing events which are time series
> > based, for batch processing; once a particular batch based on the hour
> > is processed, we delete the entries, but we were left with almost 18%
> > of deletes marked as Tombstones.
> >
I ran compaction on the particular CF; the tombstone count didn't come
down.
Can anyone suggest the optimal tuning/recommended practice for compaction
strategy and GC grace period with 100k entries and deletes every hour?
Warm Regards
Abhishek Singh
> handoff to replay the database mutations the node missed while it was
> down. Cassandra does not replay a mutation for a tombstoned record
> during its grace period."
The tombstone here is on the recovered node or the coordinator?
The tombstone is a special write record, so it must have a writetime.
We could compare the writetime between the version in the hint and the
version of the tombstone, which is enough to make the choice.
<lu...@maurobenevides.com.br> wrote:
Dear community,
I have been using TWCS in my lab, with TTL'd data.
In the debug log there is always the sentence:
"TimeWindowCompactionStrategy.java:65 Disabling tombstone compactions for
TWCS". Indeed, the line is always repeated.
What does it actually mean? If my data gets expired
Hello Simon.
Tombstones are a tricky topic in Cassandra that has brought up a lot of
questions over time. I exposed my understanding in a blog post last year
and thought it might be of interest to you; even though things have
probably evolved a bit, the principles and tuning did not change that
much, I guess.
Got it. Thank you.
From: Meg Mara
Date: 2017-12-05 01:54
To: user@cassandra.apache.org
Subject: RE: Tombstone warnings in log file
Simon,
It means that in processing your queries, Cassandra is going through that many
tombstone cells in order to return your results. It is because some of the
partitions that you are querying for have already expired. The warning is just
cassandra's way of letting you know that your reads
Hi,
My cluster is running 2.2.8, with no updates and no deletions, only
insertions with TTL.
I saw the warnings below recently. What do they mean, and what's the
impact?
WARN [SharedPool-Worker-2] 2017-12-04 09:32:48,833 SliceQueryFilter.java:308 -
Read 2461 live and 1978 tombstone cells
On Sat, Sep 2, 2017 at 8:34 PM, Jeff Jirsa <jji...@gmail.com> wrote:
>>
>>> If you're on 3.0 (3.0.6 or 3.0.8 or newer I don't remember which), TWCS
>>> was designed for ttl-only time series use cases
>>> Alternatively, if you have IO to spare, you may find LCS works as well
>>> (it'll cause quite a bit more compaction, but a much higher chance to
>>> compact away tombstones)
There are also tombstone-focused sub-properties to more aggressively
compact sstables that have a lot of tombstones - check the docs for
"unchecked tombstone compaction" and "tombstone threshold" - enabling
those will enable more aggressive automatic single-sstable compaction.
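As a sketch of those sub-properties (keyspace and table names are made up; the sub-property names are the real compaction options):

```cql
-- More aggressive single-sstable tombstone compaction:
-- tombstone_threshold is the estimated droppable-tombstone ratio that
-- makes an sstable a candidate (0.2 is the default);
-- unchecked_tombstone_compaction skips the pre-check on whether the
-- compaction is actually likely to drop anything.
ALTER TABLE myks.events WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'tombstone_threshold': '0.2',
    'unchecked_tombstone_compaction': 'true'
};
```

Note that even with these enabled, a tombstone is only purged after gc_grace_seconds and once all sstables containing the shadowed data take part in a compaction.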
Yes, you are right. I am using the STCS compaction strategy with a kind of
time series model. Too much disk space has been occupied.
What should I do to stop the disk from filling up?
I only want to keep the most recent 100 days of data, so I set
default_time_to_live = 8640000 (100 days).
I know I
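For a TTL-only time series table like this, the TWCS suggestion from earlier in the thread can be sketched as follows (table name and window size are assumptions):

```cql
-- TWCS groups sstables into time windows, so an entire sstable can be
-- dropped once every cell in its window has expired.
ALTER TABLE myks.timeseries WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1'
}
AND default_time_to_live = 8640000;  -- 100 days, as in the thread
```

With STCS, expired data tends to get stranded in huge sstables that rarely compact; TWCS avoids that by expiring whole windows.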
-uber.jar -l
>>>> localhost:7199
>>>>
>>>> In the above, I am using a jmx method. But it seems that the file
>>>> size doesn't change. Is my command wrong?
>>>>
>>>> > On Sep 1, 2017, at 2:17 PM, Jeff Jirsa <jji...@gmail.com> wrote:
>>> > User defined compaction to do a single sstable compaction on just
>>> > that sstable
>>> >
>>> > It's a nodetool command in very recent versions, or a jmx method in
>>> > older versions
>>> >
>>> > --
>>> > Jeff Jirsa
On Aug 31, 2017, at 11:04 PM, qf zhou <zhouqf2...@gmail.com> wrote:

I am using a cluster with 3 nodes and the cassandra version is 3.0.9. I
have used it for about 6 months. Now each node has about 1.5T of data on
disk.
I found some sstable files are over 300G. Using the sstablemetadata
command, I found: Estimated droppable tombstones: 0.9622972799707109.
It is obvious that too much tombstone data exists.
The default_time_to_live = 8640000 (100 days) and gc_grace_seconds =
432000 (5 days). Using nodetool compactionstats, I found that some
compaction processes exist.
So I really want to know how to clear the tombstone data? Otherwise the disk
According to http://docs.datastax.com/en/cql/3.1/cql/ddl/ddl_when_use_index_c.html#concept_ds_sgh_yzz_zj__upDatIndx
> Cassandra stores tombstones in the index until the tombstone limit
reaches 100K cells. After exceeding the tombstone limit, the query that
uses the indexed value will fail.
Much appreciated, all of you; I'll study the blog.
From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
Sent: 16 November 2016, 23:26
To: user@cassandra.apache.org
Cc: Fabrice Facorat
Subject: Re: Some questions to updating and tombstone
Hi Boying,
Old value is not a tombstone; it remains until compaction.
Be careful: the above is generally true, but not always.
Tombstones can actually be generated by UPDATE in some corner cases, e.g.
when using collections or prepared statements.
I wrote a detailed blog post about deletes
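One of those corner cases, sketched with a hypothetical table: replacing a whole collection implicitly deletes its previous contents, which writes a range tombstone even though the statement is an UPDATE:

```cql
-- Hypothetical table with a collection column.
CREATE TABLE myks.users (id uuid PRIMARY KEY, emails set<text>);

-- Overwriting the whole set writes a range tombstone over the old
-- contents before inserting the new elements:
UPDATE myks.users SET emails = {'a@example.com'}
WHERE id = 62c36092-82a1-3a00-93d1-46196ee77204;

-- Appending to the set does not generate a tombstone:
UPDATE myks.users SET emails = emails + {'b@example.com'}
WHERE id = 62c36092-82a1-3a00-93d1-46196ee77204;
```

Prepared statements can do the same thing when unset columns are bound as explicit nulls.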
If you don't want tombstones, don't generate them ;)
>
> More seriously, tombstones are generated when:
> - doing a DELETE
> - TTL expiration
> - set a column to NULL
>
> However tombstones are an issue only if, for the same value, you have
> many tombstones (i.e. you keep overwriting the same values with DELETEs
> and tombstones). Having 1 tombstone for 1 value is not an issue; having
> 1000 tombstones for 1 value is a problem. Does your use case really
> overwrite data with DELETE or NULL?
So what you may want to know is how many tombstones you read on average
when reading a value. This is available
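The three tombstone sources listed above can be sketched against a hypothetical table (all names are illustrative):

```cql
-- Assume: CREATE TABLE myks.t (pk int PRIMARY KEY, v text);

-- 1) An explicit DELETE writes a tombstone:
DELETE FROM myks.t WHERE pk = 1;

-- 2) A TTL'd cell turns into a tombstone when it expires:
INSERT INTO myks.t (pk, v) VALUES (2, 'x') USING TTL 3600;

-- 3) Setting a column to NULL writes a cell tombstone:
UPDATE myks.t SET v = null WHERE pk = 3;
```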
Hi Boying,
I agree with Vladimir. If compaction does not compact the two sstables
with the updates soon, disk space will be wasted. For example, if the
updates are not close in time, the first update might be in a big sstable
by the time the second update is being written
Hi Boying,
UPDATE writes a new value with a new timestamp. The old value is not a
tombstone, but remains until compaction. gc_grace_period is not related
to this.
Best regards, Vladimir Yudovin,
Winguzone - Hosted Cloud Cassandra
Launch your cluster in minutes.
On Mon, 14 Nov 2016 03:02:21 -0500, Lu, Boying wrote:
Hi, All,
Will Cassandra generate a new tombstone when updating a column using a CQL
UPDATE statement?
And is there any way to get the number of tombstones of a column family,
since we want to avoid generating too many tombstones within
gc_grace_period?
Thanks
Boying
On Tue, 8 Nov, 2016 at 2:11 pm, Oleg Krayushkin<allight...@gmail.com> wrote:
Hi, could you please clarify: 100k tombstone limit for SE is per CF,
cf-node, original sstable or (very unlikely) partition?
Thanks!
--
Oleg Krayushkin
>>> Could you be more specific on how you are running repair? What's the
>>> precise command line for that, does it run on several nodes at the
>>> same time, etc...?
>>> What is your gc_grace_seconds?
>>> Do you see errors in your logs that would be linked to repairs
>>> (validation failure or failure to create a merkle tree)?
>>> You seem to mention a single node that went down, but is there a link
>>> between the node that went down and the fact that deleted data comes
>>> back to life?
>>> What is your strategy for cyclic maintenance repair (schedule, command
>>> line or tool, etc...)?
>>>
>>> Thanks,
>>>
>>> On Thu, Sep 29, 2016 at 10:40 AM Atul Saroha <atul.sar...@snapdeal.com>
>>> wrote:
Hi,
We have seen a weird behaviour in cassandra 3.6.
Once, our node went down for more than 10 hrs. After that, we ran nodetool
repair multiple times, but tombstones are not getting synced properly over
the cluster. On a day-to-day basis, on expiry of every grace period,
deleted records start
I bulk-loaded a few tables using CQLSSTableWriter/sstableloader. The data
is a large amount of wide rows with lots of nulls. It takes a day or two
for the compaction to complete. The sstable count is in the single digits.
The maximum partition size is ~50M and the mean size is ~5M. However, I am
seeing frequent
10505
>
> There are buggy versions of cassandra that will write multiple
> tombstones during compaction. 2.1.12 SHOULD correct that, if you're on
> 2.1.
>
>
>
> From: Kai Wang
> Reply-To: "user@cassandra.apache.org"
> Date: Monday, December 7, 2015 at 3:46 PM
> To: "user@cassandra.apache.org"
> Subject: lots of tombstone after compaction
The nulls in the original data created the tombstones. They won't go away
until gc_grace_seconds have passed (the default is 10 days).
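The usual way to avoid this (a sketch, with hypothetical names): don't bind explicit nulls when bulk loading; a column that is simply omitted from the INSERT writes nothing, and therefore no tombstone:

```cql
-- Assume: CREATE TABLE myks.wide (id int PRIMARY KEY, a text, b text);

-- Binding NULL writes a tombstone cell for b:
INSERT INTO myks.wide (id, a, b) VALUES (1, 'x', null);

-- Omitting the column writes nothing for b - no tombstone:
INSERT INTO myks.wide (id, a) VALUES (2, 'y');
```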
Hello, I have a question about the tombstone removal process for leveled
compaction strategy. I am migrating a lot of text data from a cassandra
column family to elastic search. The column family uses leveled compaction
strategy. As part of the migration, I am deleting the migrated rows from
Great!!! Thanks Andrei!!! That's the answer I was looking for :)
Thanks
Anuj Wadehra
Sent from Yahoo Mail on Android
From: Andrei Ivanov <aiva...@iponweb.net>
Date: Thu, 23 Apr, 2015 at 11:57 pm
Subject: Re: Drawbacks of Major Compaction now that Automatic Tombstone
Compaction Exists
Just
Thanks Robert!!
The JIRA was very helpful in understanding how the tombstone threshold is
implemented. The ticket also says that running a major compaction weekly
is an alternative. I actually want to understand: if I run a major
compaction on a CF with 500 GB of data and a single giant file is created,
do you see any problems with Cassandra processing
On Tue, Apr 14, 2015 at 8:29 PM, Anuj Wadehra anujw_2...@yahoo.co.in
wrote:
Hi Robert,
Any comments or suggestions ?
Thanks
Anuj Wadehra
Sent from Yahoo Mail on Android
From: Anuj Wadehra <anujw_2...@yahoo.co.in>
Date: Wed, 15 Apr, 2015 at 8:59 am
Subject: Re: Drawbacks of Major Compaction now that Automatic Tombstone
Compaction Exists
Hi Robert,
By automatic tombstone compaction, I am referring to the
tombstone_threshold sub-property under the compaction strategy in CQL. It
is 0.2 by default. So what I understand from the Datastax documentation is
that even if an sstable does not find sstables of similar size (STCS), an
automatic
On Mon, Apr 13, 2015 at 10:52 AM, Anuj Wadehra anujw_2...@yahoo.co.in
wrote:
I have no idea how this interacts with the automatic compaction stuff; if
you find out, let us know?
But if you want to
Any comments on side effects of major compaction, especially when the
sstable generated is 100+ GB?
After Cassandra 1.2, automatic tombstone compaction occurs even on a
single sstable if the tombstone percentage exceeds the tombstone_threshold
sub-property specified in the compaction strategy. So
Rob,
Does that mean once you split it back into small ones, automatic
compaction will continue to happen on a more frequent basis, now that it's
no longer a single large monolith?
Rahul
On Apr 13, 2015, at 3:23 PM, Robert Coli rc...@eventbrite.com wrote:
On Mon, Apr 13, 2015 at 10:52 AM,
On Mon, Apr 13, 2015 at 12:26 PM, Rahul Neelakantan ra...@rahul.be wrote:
Does that mean once you split it back into small ones, automatic
compaction will continue to happen on a more frequent basis now that it's
no longer a single large monolith?
That's what the words "size tiered" mean in
are not impacted, we were left with no option but to run a major
compaction to ensure that thousands of tiny sstables are compacted.
Queries:
Does major compaction have any drawback after automatic tombstone
compaction got implemented in 1.2 via the tombstone_threshold sub-property
(CASSANDRA-3442)?
I understand that the huge SSTable created after a major compaction won't
be compacted with new data any time soon, but is that a problem if purged
data is removed via automatic tombstone compaction?
A feel for the data that we have: 8000 row keys per day, and columns are
added throughout the day; 300K columns on average per row key.
From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
Sent: Friday, January 30, 2015 4:26 AM
To: user@cassandra.apache.org
Subject: Re: Tombstone gc after gc grace seconds
The point is that all the parts or fragments of the partition (wide rows /
time series) need to be in the compaction for the tombstone to be purged,
so your only way to get rid of tombstones is a major compaction.
That's how I understand this.
Hope this helps,
C*heers,
Alain
2015-01-30 1:29 GMT+01:00 Mohammed Guller moham...@glassbeam.com:
Ravi -
It may help.
What version are you running? Do you know if minor
b. unchecked_tombstone_compaction - True enables more aggressive than
normal tombstone compactions: a single-SSTable tombstone compaction runs
without checking the likelihood of success. Cassandra 2.0.9 and later.
Could I use these to get what I want?
The problem I am encountering is that even long
To: user@cassandra.apache.org
Subject: RE: Tombstone gc after gc grace seconds
Hi,
I saw there are 2 more interesting parameters –
a. tombstone_threshold - A ratio of garbage-collectable tombstones to all
contained columns, which if exceeded by the SSTable triggers compaction (with
no other
My understanding is consistent with Alain's: there's no way to force a
tombstone-only compaction; your only option is major compaction. If you're
using size-tiered, that comes with its own drawbacks.
I wonder if there's a technical limitation that prevents introducing a
shadowed data cleanup
...@clearpoolgroup.com:
Hi,
I want to trigger just a tombstone compaction after gc grace seconds has
completed, not "nodetool compact keyspace columnfamily".
Is there any way I can do that?
Thanks
Yep, you may register and log into the Apache JIRA and click Vote for this
issue, in the upper right-side of the ticket.
On Wed, Jan 21, 2015 at 11:30 PM, Ian Rose ianr...@fullstory.com wrote:
Ah, thanks for the pointer Philip. Is there any kind of formal way to
vote up issues? I'm assuming
Ian,
Leaving a comment explaining your situation and how, as an operator of a
Cassandra Cluster, this would be valuable, would probably help most.
On Thu, Jan 22, 2015 at 6:06 AM, Paulo Ricardo Motta Gomes
paulo.mo...@chaordicsystems.com wrote:
Yep, you may register and log into the Apache
Hi,
I want to trigger just a tombstone compaction after gc grace seconds has
completed, not "nodetool compact keyspace columnfamily".
Is there any way I can do that?
Thanks
There is an open ticket for this improvement at
https://issues.apache.org/jira/browse/CASSANDRA-8561
On Wed, Jan 21, 2015 at 4:55 PM, Ian Rose ianr...@fullstory.com wrote:
When I see a warning like Read 9 live and 5769 tombstoned cells in ...
etc is there a way for me to see the partition key that this query was
operating on?
The description in the original JIRA ticket (
https://issues.apache.org/jira/browse/CASSANDRA-6042) reads as though
exposing this information
Ah, thanks for the pointer Philip. Is there any kind of formal way to
vote up issues? I'm assuming that adding a comment of +1 or the like
is more likely to be *counter*productive.
- Ian
On Wed, Jan 21, 2015 at 5:02 PM, Philip Thompson
philip.thomp...@datastax.com wrote:
There is an open
tombstone_warn_threshold). 2147449199 columns was requested, slices=[-],
delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
I run the command:
nodetool compact system
But the tombstone number does not decrease. I still see the warnings with
the exact same number of tombstones.
Why is this happening? What should I do