RE: incremental repairs with -pr flag?

2016-10-25 Thread Sean Bridges
once with incremental repair, which is what -pr was intended to address for full repairs, by repairing all token ranges only once instead of replication-factor times. Cheers, On Mon, 24 Oct 2016 at 18:05, Sean Bridges <sean.brid...@globalrelay.net> wrote: Hey, In the datastax documen
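The point above — that -pr makes each token range get repaired exactly once across a full-cluster rolling repair, instead of replication-factor times — can be illustrated with a toy model. The ring size, replication factor, and placement function below are illustrative assumptions, not Cassandra's actual replica placement code:

```python
# Toy model of why "nodetool repair -pr" avoids redundant work when every
# node in the cluster runs repair once. Assumed: 6 nodes, RF=3, and simple
# "next RF nodes on the ring" placement (an illustration, not Cassandra code).
NODES = 6
RF = 3

def replicas(token_range):
    # Range i is primarily owned by node i; the replicas are the next RF-1 nodes.
    return [(token_range + k) % NODES for k in range(RF)]

def repairs(use_pr):
    # Count how many times each token range is repaired when every node runs repair.
    counts = {r: 0 for r in range(NODES)}
    for node in range(NODES):
        for r in range(NODES):
            if use_pr:
                if replicas(r)[0] == node:   # -pr: repair the primary range only
                    counts[r] += 1
            else:
                if node in replicas(r):      # full: repair every replicated range
                    counts[r] += 1
    return counts

print(repairs(use_pr=False))  # every range repaired RF (= 3) times
print(repairs(use_pr=True))   # every range repaired exactly once
```

With -pr, the per-range repair count drops from RF to 1, which is why a rolling `repair -pr` across all nodes still covers the whole ring without duplicated work.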

incremental repairs with -pr flag?

2016-10-24 Thread Sean Bridges
e -pr with incremental repairs? Thanks, Sean [1] https://docs.datastax.com/en/cassandra/3.x/cassandra/operations/opsRepairNodesManualRepair.html -- Sean Bridges, senior systems architect, Global Relay, sean.bridges@globalrelay.net, 866.484.6630 Ne

Re: non incremental repairs with cassandra 2.2+

2016-10-19 Thread Sean Bridges
d and others marked as unrepaired, which will never be compacted together. You might want to flag all sstables as unrepaired before moving on, if you do not intend to switch to incremental repair for now. Cheers, On Wed, Oct 19, 2016 at 6:31 PM Sean Bridges <sean.brid...@globalrelay.net>
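The repaired/unrepaired split described above can be sketched as follows. After an incremental repair, each sstable carries a repaired flag, and compaction only groups sstables within the same pool, so a repaired and an unrepaired sstable are never merged. The `SSTable` type and pool logic here are an illustrative model, not Cassandra's implementation:

```python
# Illustrative model of the repaired/unrepaired sstable pools: compaction
# candidates are drawn from a single pool, so the two sets never merge.
from collections import namedtuple

SSTable = namedtuple("SSTable", ["name", "repaired"])

def compaction_pools(sstables):
    # Partition sstables by their repaired flag before picking candidates.
    pools = {"repaired": [], "unrepaired": []}
    for s in sstables:
        pools["repaired" if s.repaired else "unrepaired"].append(s)
    return pools

tables = [SSTable("a-1", True), SSTable("a-2", False), SSTable("a-3", True)]
pools = compaction_pools(tables)
print([s.name for s in pools["repaired"]])    # ['a-1', 'a-3']
print([s.name for s in pools["unrepaired"]])  # ['a-2']
```

This is why the advice above suggests flagging everything back to unrepaired if you do not plan to keep running incremental repairs: a permanently split dataset means some overlapping sstables are never compacted together.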

non incremental repairs with cassandra 2.2+

2016-10-19 Thread Sean Bridges
Hey, We are upgrading from cassandra 2.1 to cassandra 2.2. With cassandra 2.1 we would periodically repair all nodes, using the -pr flag. With cassandra 2.2, the same repair takes a very long time, as cassandra does an anti-compaction after the repair. This anti-compaction causes most (all

does DC_LOCAL require manually truncating system.paxos on failover?

2015-04-02 Thread Sean Bridges
We are using lightweight transactions, two datacenters and DC_LOCAL consistency level. There is a comment in CASSANDRA-5797, "This would require manually truncating system.paxos when failing over." Is that required? I don't see it documented anywhere else. Thanks, Sean https://issues.apache.

Re: are repairs in 2.0 more expensive than in 1.2

2014-10-24 Thread Sean Bridges
> On Thu, Oct 23, 2014 at 9:33 AM, Sean Bridges > wrote: > >> The change from parallel to sequential is very dramatic. For a small >> cluster with 3 nodes, using cassandra 2.0.10, a parallel repair takes 2 >> hours, and io throughput peaks at 6 mb/s. Sequential repai

Re: are repairs in 2.0 more expensive than in 1.2

2014-10-23 Thread Sean Bridges
repair takes 40 hours, with average io around 27 mb/s. Should I file a jira? Sean On Wed, Oct 15, 2014 at 9:23 PM, Sean Bridges wrote: > Thanks Robert. Does the switch to sequential from parallel explain why IO > increases? We see significantly higher IO with 2.0.10. > > The node

Re: are all files for an sstable immutable?

2014-10-16 Thread Sean Bridges
cularly important, they're primarily an > optimization for startup time. > > On Thu, Oct 16, 2014 at 12:20 PM, Sean Bridges > wrote: > >> Hello, >> >> I thought an sstable was immutable once written to disk. Before >> upgrading from 1.2.18 to 2.0.10 we took a

are all files for an sstable immutable?

2014-10-16 Thread Sean Bridges
Hello, I thought an sstable was immutable once written to disk. Before upgrading from 1.2.18 to 2.0.10 we took a snapshot of our sstables. Now when I compare the files in the snapshot dir and the original files, the Summary.db files have a newer modified date, and the file sizes have changed. Th

Re: are repairs in 2.0 more expensive than in 1.2

2014-10-15 Thread Sean Bridges
other replicas, because at least one replica in the snapshot is not undergoing repair." Sean [1] http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsRepair.html On Wed, Oct 15, 2014 at 5:36 PM, Robert Coli wrote: > On Wed, Oct 15, 2014 at 4:54 PM, Sean Bridges >

are repairs in 2.0 more expensive than in 1.2

2014-10-15 Thread Sean Bridges
Hello, We upgraded a cassandra cluster from 1.2.18 to 2.0.10, and it looks like repair is significantly more expensive now. Is this expected? We schedule rolling repairs through the cluster. With 1.2.18 a repair would take 3 hours or so. The first repair after the upgrade has been going on for

Re: Consistency model

2011-04-17 Thread Sean Bridges
t, because Cassandra doesn't wait until repair writes >> are acked before the answer is returned. This is something we can fix. >> >> On Sun, Apr 17, 2011 at 12:05 AM, Sean Bridges >> wrote: >> > Tyler, your answer seems to contradict this email by Jonathan Ell

Re: Consistency model

2011-04-17 Thread Sean Bridges
is something we can fix. > > On Sun, Apr 17, 2011 at 12:05 AM, Sean Bridges wrote: >> Tyler, your answer seems to contradict this email by Jonathan Ellis >> [1].  In it Jonathan says, >> >> "The important guarantee this gives you is that once one quorum read >

Re: Consistency model

2011-04-16 Thread Sean Bridges
Tyler, your answer seems to contradict this email by Jonathan Ellis [1]. In it Jonathan says, "The important guarantee this gives you is that once one quorum read sees the new value, all others will too. You can't see the newest version, then see an older version on a subsequent write [sic, I a

Re: Consistency model

2011-04-16 Thread Sean Bridges
If you are reading and writing at quorum, then what you are seeing shouldn't happen. You shouldn't be able to read N+1 until N+1 has been committed to a quorum of servers. At this point you should not be able to read N anymore, since there is no quorum that contains N. Dan - I think you are righ
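The quorum argument above — that once N+1 is committed to a quorum, no read quorum can return only N — follows from the fact that with R + W > N any read quorum must intersect any write quorum in at least one replica. A brute-force check of that overlap property (the replica count and quorum size below are illustrative assumptions):

```python
# Why a quorum read must see a quorum-committed write: with N replicas and
# quorum = majority, every read quorum shares at least one replica with
# every write quorum, so no read set can consist entirely of stale nodes.
from itertools import combinations

N = 3
QUORUM = N // 2 + 1  # 2 of 3

replicas = set(range(N))
overlap_always = all(
    set(w) & set(r)  # non-empty intersection for every quorum pair
    for w in combinations(replicas, QUORUM)
    for r in combinations(replicas, QUORUM)
)
print(overlap_always)  # True
```

This is the mechanical reason "there is no quorum that contains N" once N+1 is on a quorum: the read quorum always includes at least one replica holding N+1.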

Re: 10 minute cassandra pause

2010-06-23 Thread Sean Bridges
23, 2010 at 2:26 PM, Sean Bridges wrote: >> We were running a load test against a single 0.6.2 cassandra node.  24 >> hours into the test,  Cassandra appeared to be nearly frozen for 10 >> minutes.  Our write rate went to almost 0, and we had a large number >> of write time

10 minute cassandra pause

2010-06-23 Thread Sean Bridges
We were running a load test against a single 0.6.2 cassandra node. 24 hours into the test, Cassandra appeared to be nearly frozen for 10 minutes. Our write rate went to almost 0, and we had a large number of write timeouts. We weren't swapping or gc'ing at the time. It looks like the problems

Re: using more than 50% of disk space

2010-05-27 Thread Sean Bridges
12:00 AM, gabriele renzi wrote: > On Wed, May 26, 2010 at 8:00 PM, Sean Bridges > wrote: > > So after CASSANDRA-579, anti compaction won't be done on the source node, > > and we can use more than 50% of the disk space if we use multiple column > > families? > >

Re: using more than 50% of disk space

2010-05-26 Thread Sean Bridges
for some > background here: I was just about to start working on this one, but it won't > make it in until 0.7. > > > -Original Message- > From: "Sean Bridges" > Sent: Wednesday, May 26, 2010 11:50am > To: user@cassandra.apache.org > Subject: using m

using more than 50% of disk space

2010-05-26 Thread Sean Bridges
We're investigating Cassandra, and we are looking for a way to get Cassandra to use more than 50% of its data disks. Is this possible? For major compactions, it looks like we can use more than 50% of the disk if we use multiple similarly sized column families. If we had 10 column families of the s
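The arithmetic behind the question above: a major compaction may need scratch space roughly equal to the column family being compacted (in the worst case the output is as large as the input). With one column family that means keeping ~50% free; with k similar-sized column families compacted one at a time, only 1/k of the data needs matching headroom. A back-of-envelope sketch (the worst-case-output assumption is mine, not a figure from the thread):

```python
# Back-of-envelope for compaction headroom: compacting one of k equal-sized
# column families at a time needs free space for 1/k of the data, so the
# usable fraction of the disk is roughly k / (k + 1).
def max_usable_fraction(num_column_families):
    k = num_column_families
    return k / (k + 1)

print(max_usable_fraction(1))   # 0.5 -> the familiar 50% rule of thumb
print(max_usable_fraction(10))  # ~0.909 with 10 similar-sized CFs
```

So with 10 similarly sized column families the cluster could in principle fill about 90% of the disk before a major compaction runs out of room, which matches the intuition in the question.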