> It will add complexity to your code, but it will prevent severe
> performance issues in Cassandra.
>
> Tombstones won't be a problem for repair; they get repaired like regular
> cells. They mainly hurt the read path, and they use space on disk.
>
> On Tue, Jan 16, 2018
> partitions that cannot have expired yet.
> Those techniques usually work better with TWCS, but the former could make
> you hit a lot of SSTables if your partitions can spread over all time
> buckets, so only use TWCS if you can restrict individual reads to up to 4
> time windows.
> send it to replicas during reads to cover all possible cases.
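
(For illustration only: a time-bucketed table using TWCS with a table-level
TTL could look like the sketch below. The table and column names are
hypothetical, not from this thread.)

    -- hypothetical example table, not from the thread
    CREATE TABLE events_by_day (
        source_id text,
        bucket    date,        -- time bucket in the partition key keeps a read inside a few windows
        ts        timestamp,
        payload   text,
        PRIMARY KEY ((source_id, bucket), ts)
    ) WITH compaction = {
            'class': 'TimeWindowCompactionStrategy',
            'compaction_window_unit': 'DAYS',
            'compaction_window_size': 1 }
      AND default_time_to_live = 2592000;   -- 30 days

A read then names one or a few buckets (WHERE source_id = 'x' AND bucket =
'2018-01-16') instead of touching every time window.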
>
>
> On Fri, Jan 12, 2018 at 5:28 PM Python_Max wrote:
>
>> Thank you for the response.
>>
>> I know about the option of setting a TTL per column or even per item in a
>> collection. However, in my example
> simply because technically it is possible to set a different TTL value
> on each column of a CQL row.
>
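
(As a small illustration of per-column TTLs, again with made-up names: each
write carries its own TTL, so cells in the same CQL row can expire at
different times.)

    -- reusing the hypothetical events_by_day table from above
    -- the whole insert gets one TTL
    INSERT INTO events_by_day (source_id, bucket, ts, payload)
    VALUES ('x', '2018-01-10', '2018-01-10 13:29:25+0000', 'v1')
    USING TTL 86400;

    -- a later update can give a single column a different TTL
    UPDATE events_by_day USING TTL 3600
    SET payload = 'v2'
    WHERE source_id = 'x' AND bucket = '2018-01-10'
      AND ts = '2018-01-10 13:29:25+0000';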
> On Wed, Jan 10, 2018 at 2:59 PM, Python_Max wrote:
>
>> Hello, C* users and experts.
>>
>> I have (one more) question about tombstones.
>>
>> Consider th
> sstable it can just set the properties to match those of
> the sstable to prevent this.
>
> Chris
>
> On Wed, Jan 10, 2018 at 4:16 AM, Python_Max wrote:
>
>> Hello all.
>>
>> I have an error when trying to dump an SSTable (Cassandra 3.11.1):
>>
>> $ sstabledump
>> design of Cassandra. Here it's probably much easier to just follow the
>> recommended procedure when adding and removing nodes.
>>
>> On 16 Dec. 2017 01:37, "Python_Max" wrote:
>>
>> Hello, Jeff.
>>
>>
>> Using your hint I was able to reproduce
"c3", "deletion_info" : { "local_delete_time" :
"2018-01-10T13:29:25Z" }
}
]
}
]
}
]
The question is: why does Cassandra create a tombstone for every column
instead of a single tombstone per row?
In the production environment I have a table with ~30 columns, and it gives
me a warning about 30k tombstones and 300 live rows. That is about 30 times
more than it needs to be.
Can this behavior be tuned in some way?
Thanks.
--
Best regards,
Python_Max.
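
(A possible cause, assumed here for illustration rather than taken from the
thread: per-cell writes and deletes tombstone each column, while a delete by
primary key leaves a single row tombstone. Reusing the hypothetical table
from above:)

    -- writing null into a column creates one cell tombstone per column
    INSERT INTO events_by_day (source_id, bucket, ts, payload)
    VALUES ('x', '2018-01-10', '2018-01-10 13:29:25+0000', null);

    -- deleting by full primary key creates a single row-level tombstone
    DELETE FROM events_by_day
    WHERE source_id = 'x' AND bucket = '2018-01-10'
      AND ts = '2018-01-10 13:29:25+0000';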
issues like this in the bug tracker.
Shouldn't sstabledump be read-only?
--
Best regards,
Python_Max.
h keys is 'nodetool cleanup', isn't it?
On 14.12.17 16:14, kurt greaves wrote:
Are you positive your repairs are completing successfully? Can you
send through an example of the data in the wrong order? What you're
saying certainly shouldn't happen, but there's a lot
were added would have some marker and never be
returned by a select query.
Thank you very much, Jeff, for pointing me in the right direction.
On 13.12.17 18:43, Jeff Jirsa wrote:
Did you run cleanup before you shrank the cluster?
--
Best Regards,
Python_Max.
zombie data?
On 13.12.17 18:43, Jeff Jirsa wrote:
Did you run cleanup before you shrank the cluster?
--
Best Regards,
Python_Max.
node and using 'nodetool
removenode' in case the node itself was streaming wrong data on
decommission, but that did not work either (the deleted data came back to
life).
Is this a known issue?
PS: I have not tried 'nodetool scrub' yet, nor dropping r