This is not guaranteed to be safe.
If the corrupted sstable holds a tombstone that is past gc_grace_seconds, and
another sstable still holds the deleted data it shadows, removing the corrupt
sstable will bring that data back to life, and repair will spread it around
the ring.
If that's problematic for you, you should cons
All,
We noticed that the response time sometimes jumps very high. The following is
from the Cassandra GC log.
[Eden: 760.0M(760.0M)->0.0B(11.2G) Survivors: 264.0M->96.0M Heap:
7657.7M(20.0G)->6893.3M(20.0G)]
Heap after GC invocations=43481 (full 0):
garbage-first heap total 20971520K, us
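For what it's worth, the before/after figures in a G1 line like the one above can be pulled apart mechanically. Below is a minimal sketch (not part of Cassandra or the JDK tooling) that parses the Heap transition and reports how much was reclaimed by that collection:

```python
import re

# Unit suffixes used in G1 GC log sizes, e.g. "760.0M" or "0.0B".
UNITS = {"B": 1, "K": 1024, "M": 1024**2, "G": 1024**3}

def to_bytes(size):
    """Convert a size string like '760.0M' to bytes."""
    return float(size[:-1]) * UNITS[size[-1]]

def parse_heap_transition(line):
    """Return (heap_before, heap_after) in bytes from a G1 heap line."""
    m = re.search(r"Heap: ([\d.]+[BKMG])\([\d.]+[BKMG]\)->([\d.]+[BKMG])", line)
    if m is None:
        raise ValueError("no Heap transition found in line")
    return to_bytes(m.group(1)), to_bytes(m.group(2))

line = ("[Eden: 760.0M(760.0M)->0.0B(11.2G) Survivors: 264.0M->96.0M "
        "Heap: 7657.7M(20.0G)->6893.3M(20.0G)]")
before, after = parse_heap_transition(line)
print(f"reclaimed {(before - after) / 1024**2:.1f}M")  # prints "reclaimed 764.4M"
```

So the quoted collection reclaimed roughly 764M out of a 20G heap, which by itself is a normal young collection; the pause times elsewhere in the log are what would explain the latency spikes.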
Yes. Move the corrupt sstable and run a repair on this node, so that it
gets in sync with its peers.
On Thu, Nov 2, 2017 at 6:12 PM, Shashi Yachavaram
wrote:
> We are on Cassandra 2.0.17 and have corrupted sstables. Ran offline
> sstablescrub but it fails with OOM. Increased the MAX_HEAP_SIZE to
Hi,
I have a problem with Cassandra 3.11.0 on Windows. I'm testing a workload
with a lot of read-then-writes that had no significant problems on Cassandra
2.x. However, now when this workload continues for a while (perhaps an hour),
Cassandra or its JVM effectively uses up all of the mac
We are on Cassandra 2.0.17 and have corrupted sstables. Ran offline
sstablescrub, but it fails with OOM. Increased MAX_HEAP_SIZE to 8G and it
still fails.
Can we move the corrupted sstable file and rerun sstablescrub, followed by a
repair?
-shashi..
Well, pretty sure they still are; at least the mutation one is. But you
should really use the dedicated metrics for this.
On 3 Nov. 2017 01:38, "Anumod Mullachery"
wrote:
> thanks ..
>
> so the dropped hints & messages are no longer captured in the Cassandra
> logs in 3.x, as they were in 2.x.
>
> -Anumod
>
>
>
> On
Hi,
I am trying to calculate the reads/second and writes/second in my Cassandra
2.1 cluster. After searching and reading, I came across the JMX bean
"org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency".
Here I can see oneMinuteRate. I have started a brand new cluster and
sta
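One caveat with oneMinuteRate is that it is an exponentially-weighted moving average, so it lags behind sudden load changes. If a plain average rate is wanted instead, one option is to sample the cumulative Count attribute exposed by the same bean twice and divide by the wall-clock interval. A minimal sketch of that arithmetic (the sample values below are made up, not from a real cluster):

```python
def rate_per_second(count_then, count_now, interval_s):
    """Average operations/second between two samples of a monotonically
    increasing counter, e.g. the Count attribute of the ClientRequest
    Write Latency bean polled over JMX."""
    if interval_s <= 0:
        raise ValueError("interval must be positive")
    return (count_now - count_then) / interval_s

# Hypothetical samples taken 60 seconds apart:
writes_t0 = 1_200_000
writes_t1 = 1_206_000
print(rate_per_second(writes_t0, writes_t1, 60.0))  # prints 100.0
```

The same approach works for the Read scope of the bean; just watch out for counter resets when a node restarts (the diff goes negative, and the sample should be discarded).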
thanks ..
so the dropped hints & messages are no longer captured in the Cassandra logs
in 3.x, as they were in 2.x.
-Anumod
On Wed, Nov 1, 2017 at 4:50 PM, kurt greaves wrote:
> You can get dropped message statistics over JMX. for example nodetool
> tpstats has a counter for dropped hints from startup. that woul
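Since the tpstats dropped-message counters are cumulative from startup, a monitoring script has to diff successive snapshots to see drops happening right now. A hypothetical sketch (the snapshot dicts stand in for values polled over JMX or parsed from nodetool tpstats output):

```python
def new_drops(previous, current):
    """Given two {message_type: dropped_count} snapshots of cumulative
    dropped-message counters, return only the message types that dropped
    anything since the previous poll, with the per-interval delta."""
    return {msg: current[msg] - previous.get(msg, 0)
            for msg in current
            if current[msg] - previous.get(msg, 0) > 0}

# Made-up snapshots from two consecutive polls:
prev = {"MUTATION": 10, "HINT": 3, "READ": 0}
curr = {"MUTATION": 25, "HINT": 3, "READ": 2}
print(new_drops(prev, curr))  # prints {'MUTATION': 15, 'READ': 2}
```

Alerting on a nonzero delta, rather than on the absolute counter, avoids firing forever because of drops that happened weeks ago.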
Because, in theory, corruption of your repaired dataset is possible, which
incremental repair won't fix.
In practice, pre-4.0 incremental repair has some flaws that can bring deleted
data back to life in some cases, which this would address.
You should also evaluate whether pre-4.0 incremental
Looks like a bug; could you open a JIRA?
> On Nov 2, 2017, at 2:08 AM, Mikhail Tsaplin wrote:
>
> Hi,
> I've upgraded Cassandra from 2.1.6 to 3.0.9 on a three-node cluster. After
> the upgrade,
> cqlsh shows the following error when trying to run the "use {keyspace};"
> command:
> 'ResponseFuture' object has
We have a large cluster running 2.1.19, with 3 datacenters:
- xxx: 220 nodes
- yyy: 220 nodes
- zzz: 500 nodes
The ping times between the datacenters are:
- xxx to yyy: 50 ms
- xxx to zzz: 240 ms
- yyy to zzz: 200 ms
There are some added complications, such as:
- our team is managing xxx and yyy
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRepairNodesWhen.html
So you mean I am being misled by this statement? Full repair is only needed
after node failure + replacement, or when adding a datacenter. Right?
At 2017-11-02 15:54:49, "kurt greaves" wrote:
Where are you seeing this?
Hi,
I've upgraded Cassandra from 2.1.6 to 3.0.9 on a three-node cluster. After
the upgrade,
cqlsh shows the following error when trying to run the "use {keyspace};"
command:
'ResponseFuture' object has no attribute 'is_schema_agreed'
The actual upgrade was done on Ubuntu 16.04 by running "apt-get upgrade
cassandra".
Where are you seeing this? If your incremental repairs work properly, full
repair is only needed in certain situations, like after node failure +
replacement, or adding a datacenter.