Re: Too many tombstones using TTL

2018-01-16 Thread Python_Max
…will complicate your code, but it will prevent severe performance issues in Cassandra. Tombstones won't be a problem for repair; they will get repaired like classic cells. They mostly affect the read path negatively, and use space on disk. On Tue, Jan 16, 2018 …
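The read-path impact mentioned above is what trips the per-query tombstone limits in cassandra.yaml. A quick check is sketched below; the file path is an illustrative assumption, and the two values shown are the stock 3.11 defaults, not numbers taken from this thread:

$ grep -E '^tombstone_' /etc/cassandra/cassandra.yaml
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000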

Re: Too many tombstones using TTL

2018-01-16 Thread Python_Max
…partitions that have no chance to be expired yet. Those techniques usually work better with TWCS, but the former could make you hit a lot of SSTables if your partitions can spread over all time buckets, so only use TWCS if you can restrict individual reads to up to 4 time windows…
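A minimal sketch of the TWCS switch being discussed, assuming a time-series table; the keyspace/table names, window size and TTL are illustrative, not taken from the thread:

cqlsh> ALTER TABLE ks.events
   ... WITH compaction = {
   ...     'class': 'TimeWindowCompactionStrategy',
   ...     'compaction_window_unit': 'DAYS',
   ...     'compaction_window_size': 1}
   ... AND default_time_to_live = 864000;  -- 10 days; keep it aligned with the write TTL

As the reply notes, this only pays off when individual reads stay within a handful of time windows.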

Re: Too many tombstones using TTL

2018-01-16 Thread Python_Max
…send it to replicas during reads to cover all possible cases. On Fri, Jan 12, 2018 at 5:28 PM, Python_Max wrote: Thank you for the response. I know about the option of setting a TTL per column, or even per item in a collection. However, in my exampl…

Re: Too many tombstones using TTL

2018-01-12 Thread Python_Max
…simply because technically it is possible to set a different TTL value on each column of a CQL row. On Wed, Jan 10, 2018 at 2:59 PM, Python_Max wrote: Hello, C* users and experts. I have (one more) question about tombstones. Consider th…
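The per-column TTL point can be seen directly in cqlsh; the table here is a made-up illustration rather than the schema from the thread:

cqlsh> CREATE TABLE ks.ttl_demo (id int PRIMARY KEY, c1 text, c2 text);
cqlsh> INSERT INTO ks.ttl_demo (id, c1, c2) VALUES (1, 'a', 'b') USING TTL 3600;
cqlsh> UPDATE ks.ttl_demo USING TTL 60 SET c2 = 'b2' WHERE id = 1;
cqlsh> SELECT ttl(c1), ttl(c2) FROM ks.ttl_demo WHERE id = 1;  -- two different TTLs on one row

Because every cell can carry its own TTL, every expired cell becomes its own tombstone, which is exactly what the original question further down observes.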

Re: sstabledump tries to delete a file

2018-01-12 Thread Python_Max
…sstable, it can just set the properties to match those of the sstable to prevent this. Chris. On Wed, Jan 10, 2018 at 4:16 AM, Python_Max wrote: Hello all. I have an error when trying to dump an SSTable (Cassandra 3.11.1): $ ssta…
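One workaround sketch until the tool is fixed: dump a copy of the sstable set so sstabledump never touches the live data directory. The paths and generation number below are illustrative assumptions:

$ mkdir /tmp/sstable-copy
$ cp /var/lib/cassandra/data/ks/mytable-*/mc-1-big-* /tmp/sstable-copy/
$ sstabledump /tmp/sstable-copy/mc-1-big-Data.db > dump.json

Copy all components of the sstable, not just Data.db, since the tool reads the companion files as well.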

Re: Deleted data comes back on node decommission

2018-01-10 Thread Python_Max
…design of Cassandra. Here it's probably much easier to just follow the recommended procedure when adding and removing nodes. On 16 Dec. 2017 01:37, "Python_Max" wrote: Hello, Jeff. Using your hint I was able to reproduce…

Too many tombstones using TTL

2018-01-10 Thread Python_Max
"c3", "deletion_info" : { "local_delete_time" : "2018-01-10T13:29:25Z" } } ] } ] } ] The question is why Cassandra creates a tombstone for every column instead of single tombstone per row? In production environment I have a table with ~30 columns and It gives me a warning for 30k tombstones and 300 live rows. It is 30 times more then it could be. Can this behavior be tuned in some way? Thanks. -- Best regards, Python_Max.

sstabledump tries to delete a file

2018-01-10 Thread Python_Max
…issues like this in the bug tracker. Shouldn't sstabledump be read-only? -- Best regards, Python_Max.

Re: Deleted data comes back on node decommission

2017-12-15 Thread Python_Max
…such keys is 'nodetool cleanup', isn't it? On 14.12.17 16:14, kurt greaves wrote: Are you positive your repairs are completing successfully? Can you send through an example of the data in the wrong order? What you're saying certainly shouldn't happen, but there's a lot…
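For reference, a sketch of the order of operations under discussion, assuming nodes were added earlier and the cluster is about to shrink; the cleanup step has to run on every node that keeps data, not just one:

$ nodetool cleanup        # on each remaining node, after nodes are added and before shrinking
$ nodetool decommission   # then retire the node that is leaving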

Re: Deleted data comes back on node decommission

2017-12-15 Thread Python_Max
…e added would have some marker and never be returned as a result to a select query. Thank you very much, Jeff, for pointing me in the right direction. On 13.12.17 18:43, Jeff Jirsa wrote: Did you run cleanup before you shrank the cluster? -- Best Regards, Python_Max.

Re: Deleted data comes back on node decommission

2017-12-14 Thread Python_Max
…zombie data? On 13.12.17 18:43, Jeff Jirsa wrote: Did you run cleanup before you shrank the cluster? -- Best Regards, Python_Max.

Deleted data comes back on node decommission

2017-12-13 Thread Python_Max
…node and using 'nodetool removenode' in case the node itself is streaming wrong data on decommission, but that did not work either (the deleted data came back to life). Is this a known issue? PS: I have not tried 'nodetool scrub' yet, nor dropping r…