Any comments on exceptions related to unfinished compactions on Cassandra
startup? What's the best way to deal with them? Are there side effects of
deleting the compactions_in_progress folder to resolve the issue?
Thanks
Anuj Wadehra
Sent from Yahoo Mail on Android
From: Anuj Wadehra anujw_2...@yahoo.co.in
On Sun, Apr 12, 2015 at 12:02 PM, Anuj Wadehra anujw_2...@yahoo.co.in
wrote:
Often we face errors on Cassandra startup regarding unfinished compactions,
particularly when Cassandra was abruptly shut down. The problem gets resolved
when we delete /var/lib/cassandra/data/system/compactions_in_progress
Unfortunately, I’ve switched email systems and don’t have my emails from that
time period. I did not file a Jira, and I don’t remember who made the patch for
me or if he filed a Jira on my behalf.
I vaguely recall seeing the fix in the Cassandra change logs, but I just went
and read them and I
On Mon, Apr 13, 2015 at 10:52 AM, Anuj Wadehra anujw_2...@yahoo.co.in
wrote:
Any comments on side effects of Major compaction especially when sstable
generated is 100+ GB?
I have no idea how this interacts with the automatic compaction stuff; if
you find out, let us know?
But if you want to
Any comments on side effects of Major compaction especially when sstable
generated is 100+ GB?
After Cassandra 1.2, automated tombstone compaction occurs even on a single
sstable if the tombstone percentage exceeds the tombstone_threshold sub-property
specified in the compaction strategy. So,
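The tombstone_threshold sub-property mentioned above is set per table. A minimal sketch, assuming a hypothetical keyspace/table name (the values shown are the documented defaults for the size-tiered strategy):

```cql
ALTER TABLE my_ks.my_table
  WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'tombstone_threshold': '0.2',              -- trigger single-sstable compaction above 20% tombstones
    'tombstone_compaction_interval': '86400'   -- but not more than once per day per sstable
  };
```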
Rob,
Does that mean once you split it back into small ones, automatic compaction
will continue to happen on a more frequent basis now that it's no longer a
single large monolith?
Rahul
On Apr 13, 2015, at 3:23 PM, Robert Coli rc...@eventbrite.com wrote:
On Mon, Apr 13, 2015 at 10:52 AM,
On Mon, Apr 13, 2015 at 12:26 PM, Rahul Neelakantan ra...@rahul.be wrote:
Does that mean once you split it back into small ones, automatic
compaction will continue to happen on a more frequent basis now that it's
no longer a single large monolith?
That's what the word size tiered means in
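The "size tiered" behavior being referred to can be illustrated with a toy model (this is a simplified sketch, not Cassandra's actual code): sstables of similar size fall into the same bucket, and a bucket becomes a compaction candidate once it holds min_threshold tables. A single huge sstable sits alone in its own bucket and is rarely touched again, while many small sstables keep forming full buckets.

```python
def size_tiered_buckets(sizes, bucket_low=0.5, bucket_high=1.5):
    """Group sstable sizes into buckets of similar size.

    A size joins a bucket when it is within [bucket_low, bucket_high]
    times the bucket's running average (defaults mirror STCS options).
    """
    buckets = []
    for size in sorted(sizes):
        for bucket in buckets:
            avg = sum(bucket) / len(bucket)
            if bucket_low * avg <= size <= bucket_high * avg:
                bucket.append(size)
                break
        else:
            buckets.append([size])  # no similar bucket: start a new one
    return buckets

def compaction_candidates(sizes, min_threshold=4):
    """Buckets with at least min_threshold tables get compacted together."""
    return [b for b in size_tiered_buckets(sizes) if len(b) >= min_threshold]
```

With four ~100 MB sstables and one 100 GB monolith, only the small bucket is a candidate; the monolith waits until peers of comparable size appear.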
What about incremental repair and sequential repair?
I ran nodetool repair <keyspace> <table> on one node. I found repair
sessions running on different nodes. Will this command repair the whole
table?
In this page:
On Mon, Apr 13, 2015 at 1:36 PM, Benyi Wang bewang.t...@gmail.com wrote:
- I need to run compaction on each node,
In general, there is no requirement to manually run compaction. Minor
compaction occurs in the background, automatically.
- To repair a table (column family), I only
I have read the documentation several times, but I am still not quite sure how
to run repair and compaction.
To my understanding:
- I need to run compaction on each node,
- To repair a table (column family), I only need to run repair on any one of
the nodes.
Am I right?
Thanks.
On Mon, Apr 13, 2015 at 3:33 PM, Jeff Ferland j...@tubularlabs.com wrote:
Nodetool repair -par: covers all nodes, computes Merkle trees for each
node at the same time. Much higher I/O load, as every copy of a key range is
scanned at once. Can be totally OK with SSDs and throughput limits. Only
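The repair variants discussed in this thread can be summarized as commands. These are not runnable here (they assume a live Cassandra 2.0/2.1-era node), and the keyspace/table names are placeholders:

```shell
# Default repair of one table: Merkle trees computed one replica at a time.
nodetool repair my_keyspace my_table

# Parallel repair (-par): trees built on all replicas at once.
# Faster overall, but much higher simultaneous I/O across the replica set.
nodetool repair -par my_keyspace my_table

# Primary-range repair (-pr): repairs only this node's primary token range.
# Run it on EVERY node so each range is repaired exactly once cluster-wide.
nodetool repair -pr my_keyspace my_table
```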
Or use spotify’s reaper and forget about it
https://github.com/spotify/cassandra-reaper
On Apr 13, 2015, at 3:45 PM, Robert Coli rc...@eventbrite.com wrote:
On Mon, Apr 13, 2015 at 3:33 PM, Jeff Ferland j...@tubularlabs.com
Hi guys,
We have recently added two datacenters to our existing 2.0.6 cluster. We
followed the process here pretty much exactly:
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_add_dc_to_cluster_t.html
We are using GossipingPropertyFileSnitch and NetworkTopologyStrategy across
Hello,
I was trying to find out which protocol versions are supported in Cassandra
2.0.14, and after reading multiple links I am very confused.
Please correct me if my understanding is wrong:
- The binary protocol version and the CQL spec version are different things?
- Cassandra 2.0.x supports CQL 3
Yes, it will look in each sstable that, according to the bloom filter, may have
data for that partition key, and use timestamps to figure out the latest
version (or none, in the case of a newer tombstone) to return for each clustering key
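The reconciliation described above can be modeled in a few lines (a toy sketch, not Cassandra's actual read path): each sstable contributes a timestamped cell per clustering key, the newest timestamp wins, and a tombstone (modeled here as None) that wins hides older data entirely.

```python
def reconcile(sstables):
    """Merge per-clustering-key cells from several sstables.

    Each sstable is a dict: clustering_key -> (timestamp, value).
    A value of None represents a tombstone. The highest timestamp wins;
    keys whose winning cell is a tombstone are dropped from the result.
    """
    merged = {}
    for table in sstables:
        for key, (ts, value) in table.items():
            if key not in merged or ts > merged[key][0]:
                merged[key] = (ts, value)
    return {k: v for k, (ts, v) in merged.items() if v is not None}
```

Note the result is the same regardless of the order the sstables are scanned in, which is why timestamp-based reconciliation works across any mix of flushed and compacted files.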
Sent from my iPhone
On Apr 12, 2015, at 11:18 PM, Anishek
Did the original patch make it into upstream? That's unclear. If so, what
was the JIRA #? Have you filed a JIRA for the new problem?
On Mon, Apr 13, 2015 at 12:21 PM, Robert Wille rwi...@fold3.com wrote:
Back in 2.0.4 or 2.0.5 I ran into a problem with delete-only workloads. If
I did lots of
Back in 2.0.4 or 2.0.5 I ran into a problem with delete-only workloads. If I
did lots of deletes and no upserts, Cassandra would report that the memtable
was 0 bytes because of an accounting error. The memtable would never flush, and
Cassandra would eventually die. Someone was kind enough to create