I don't think that statement is accurate.
Which part?
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 13/11/2012, at 6:31 AM, Binh Nguyen binhn...@gmail.com wrote:
I don't think that statement is accurate.
Which part?
Probably this part:
After running a major compaction, automatic minor compactions are no longer
triggered, frequently requiring you to manually run major compactions
Correct
On Nov 13, 2012, at 5:21 AM, André Cruz andre.c...@co.sapo.pt wrote:
On Nov 13, 2012, at 8:54 AM, aaron morton aa...@thelastpickle.com wrote:
I don't think that statement is accurate.
Which part?
Probably this part:
After running a major compaction, automatic minor compactions are no longer
triggered, frequently requiring you to manually run major compactions
Minor compactions will still be triggered whenever a size tier gets 4+ sstables
(for the default compaction strategy). So it does not affect new data.
It just takes longer for the biggest size tier to get to 4 files. So it takes
longer to compact the big output from the major compaction.
I don't think that statement is accurate. Minor compactions are still
triggered for small sstables, but for big sstables they may or may not be.
By default Cassandra waits until it finds 4 sstables of similar size before
triggering a compaction, so if the sstables are big it may take a while.
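The size-tier behaviour described above can be sketched in Python. This is a toy model, not Cassandra's actual SizeTieredCompactionStrategy code; the 0.5/1.5 bucket bounds and the min_threshold of 4 mirror the documented defaults, but the grouping logic here is a simplification for illustration.

```python
# Toy model of size-tiered compaction bucketing (illustrative, not the
# real implementation). Sstables of "similar" size land in one bucket;
# a bucket with min_threshold (default 4) members is a compaction candidate.

def bucket_sstables(sizes, bucket_low=0.5, bucket_high=1.5):
    """Group sstable sizes into buckets of roughly similar size."""
    buckets = []  # list of (average_size, [member_sizes])
    for size in sorted(sizes):
        for i, (avg, members) in enumerate(buckets):
            if bucket_low * avg <= size <= bucket_high * avg:
                members.append(size)
                buckets[i] = (sum(members) / len(members), members)
                break
        else:
            buckets.append((size, [size]))
    return [members for _, members in buckets]

def compaction_candidates(sizes, min_threshold=4):
    """Return buckets with enough sstables to trigger a minor compaction."""
    return [b for b in bucket_sstables(sizes) if len(b) >= min_threshold]

# Four small sstables trigger a minor compaction; the single huge sstable
# left behind by a major compaction sits alone in its bucket and is not
# picked until enough peers of comparable size appear.
print(compaction_candidates([10, 11, 12, 10, 50000]))  # [[10, 10, 11, 12]]
```

This is why, after a major compaction, the giant output file can wait a long time: nothing else in the keyspace is anywhere near its size tier.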
On Nov 11, 2012, at 12:01 AM, Binh Nguyen binhn...@gmail.com wrote:
FYI: Repair does not remove tombstones. To remove tombstones you need to run
compaction.
If you have a lot of data then make sure you run compaction on all nodes
before running repair. We had big trouble with our system regarding
tombstones, and it took us a long time to figure out the reason.
If you have a long-lived row with a lot of tombstones or overwrites, it's often
more efficient to select a known list of columns. There are short circuits in
the read path that can avoid older, tombstone-filled fragments of the row being
read. (Obviously this is hard to do if you don't know the column names.)
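The advantage of naming columns over slicing a tombstone-heavy row can be sketched with a small simulation. This is a toy model of the effect, not Cassandra's actual read path (the real logic lives in its names/slice query filters); the row layout and counts are invented for illustration.

```python
# Toy model: a by-name read can seek straight to the requested columns,
# while a slice must step over every cell (live or tombstone) in range.

TOMBSTONE = object()

# A wide row where 9 out of 10 columns are tombstones (illustrative data).
row = {f"col{i:04d}": (TOMBSTONE if i % 10 else f"value{i}")
       for i in range(1000)}

def read_by_names(row, names):
    """By-name read: touches only the requested columns."""
    scanned = len(names)
    live = {n: row[n] for n in names
            if n in row and row[n] is not TOMBSTONE}
    return live, scanned

def read_slice(row, start, finish):
    """Slice read: scans every cell, live or tombstone, in the range."""
    cells = [(n, v) for n, v in sorted(row.items()) if start <= n <= finish]
    live = {n: v for n, v in cells if v is not TOMBSTONE}
    return live, len(cells)

live_a, scanned_a = read_by_names(row, ["col0010", "col0990"])
live_b, scanned_b = read_slice(row, "col0010", "col0990")
print(scanned_a, scanned_b)  # prints: 2 981
```

Same two live columns either way, but the slice touches hundreds of tombstoned cells to find them.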
That must be it. I dumped the sstables to json and there are lots of records,
including ones that are returned to my application, that have the deletedAt
attribute. I think this is because the regular repair job was not running for
some time, surely longer than the grace period, and lots of tombstones
accumulated.
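Checking a json dump for deletion markers, as described above, can be scripted. The exact sstable2json output layout varies across Cassandra versions, so the `deletedAt` attribute and the sample rows below are assumptions based on what the thread reports, not a guaranteed format.

```python
import json

# Hedged sketch: count rows in an sstable2json-style dump that carry a
# deletion marker. We assume each row is a JSON object that may have a
# "deletedAt" attribute, as mentioned in the thread; adapt the predicate
# to whatever your Cassandra version actually emits.

def count_deleted(dump_text):
    rows = json.loads(dump_text)
    deleted = [r for r in rows if "deletedAt" in r]
    return len(deleted), len(rows)

# Invented sample in the assumed format: one live row, one deleted row.
sample = json.dumps([
    {"key": "61", "columns": []},
    {"key": "62", "deletedAt": 1352300000000, "columns": []},
])
print(count_deleted(sample))  # prints: (1, 2)
```

A high deleted/total ratio on a row you query often is a strong hint that tombstone buildup, not data size, is behind slow or failing slices.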
On Nov 7, 2012, at 12:15 PM, André Cruz andre.c...@co.sapo.pt wrote:
This error also happens on my application that uses pycassa, so I don't think
this is the same bug.
I have narrowed it down to a slice between two consecutive columns. Observe
this behaviour using pycassa:
What is the size of the columns? Probably those two are huge.
Can it be that you have tons and tons of tombstoned columns in the middle
of these two? I've seen plenty of performance issues with wide
rows littered with column tombstones (you could check by dumping the
sstables...)
Just a thought...
Josep M.
On Thu, Nov 8, 2012 at 12:23 PM, André Cruz andre.c...@co.sapo.pt wrote:
On Nov 7, 2012, at 2:12 AM, Chuan-Heng Hsiao hsiao.chuanh...@gmail.com wrote:
Hi André,
I am just a Cassandra user, so the following suggestions may not be valid.
I assume you are using cassandra-cli and connecting to some specific node.
You can check the following steps:
1. Can you still reproduce this issue? (If not, it may be a system/node issue.)
2. What's the result when
Yes. I can reproduce this