..., but it sounds
promising). You could probably do OK on Solaris, too, with a custom Snappy jar
and some JNA concessions.
- .Dustin
On Sep 5, 2012, at 10:36 PM, Rob Coli rc...@palominodb.com wrote:
On Sun, Jul 29, 2012 at 7:40 PM, Dustin Wenz dustinw...@ebureau.com wrote:
We've just set ...
... alone makes this cluster configuration unsuitable for production use.
- .Dustin
On Jul 30, 2012, at 2:04 PM, Dustin Wenz dustinw...@ebureau.com wrote:
Thanks for the pointer! It sounds likely that's what I'm seeing. CFStats
reports that the bloom filter size is currently several ...
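(For anyone wanting to check this on their own cluster: bloom filter sizes
show up per column family in cfstats. A minimal sketch, assuming 1.1-era
nodetool output labels and a node reachable on localhost:

    nodetool -h localhost cfstats | egrep 'Column Family:|Bloom Filter'

The grep pattern is a guess at the exact labels; adjust it to your output.)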
... is carrying SSD disks.
Again, you have to keep your bloom filters in Java heap memory, so any design
that tries to create a quadrillion small rows is going to have memory issues
as well.
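As a rough sanity check on the numbers: the standard bloom filter sizing
formula is m = -n * ln(p) / (ln 2)^2 bits for n keys at false-positive rate p.
A back-of-envelope sketch, where the key count and the ~0.0007 false-positive
target are illustrative assumptions rather than Cassandra's exact internals:

    # roughly how much heap do bloom filters need for 1e9 keys?
    awk 'BEGIN { n = 1e9; p = 0.000744;
                 bits = -n * log(p) / (log(2)^2);
                 printf "%.2f GB\n", bits / 8 / 1024^3 }'

That works out to about 1.75 GB of heap for a billion keys on one node, so a
design with a quadrillion tiny rows is clearly never going to fit.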
On Sun, Jul 29, 2012 at 10:40 PM, Dustin Wenz dustinw...@ebureau.com wrote:
I'm trying to determine if there are any practical limits on the amount of data
that a single node can handle efficiently, and if so, whether I've hit that
limit or not.
We've just set up a new 7-node cluster with Cassandra 1.1.2 running under
OpenJDK6. Each node is a 12-core Xeon with 24GB of ...
On ..., at 7:39 AM, Dustin Wenz wrote:
We recently increased the replication factor of a keyspace in our
Cassandra 1.1.1 cluster from 2 to 4. This was done by setting the
replication factor to 4 in cassandra-cli, and then running a repair on
each node.
Everything seems to have worked ... that all node
schemas are consistent.
Are there any other ways that I could potentially force Cassandra to accept
these changes?
- .Dustin
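(For anyone finding this thread later, the procedure described above amounts
to something like the following. This is a sketch from memory of 1.1-era
cassandra-cli and nodetool usage, with MyKeyspace as a placeholder, so verify
it against your own schema first:

    # inside cassandra-cli:
    update keyspace MyKeyspace with strategy_options = {replication_factor: 4};

    # then, from a shell, one node at a time:
    nodetool -h <node> repair MyKeyspace

Until the repairs finish, reads at CL.ONE can land on newly assigned replicas
that don't have the data yet.)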
On Jul 13, 2012, at 10:02 AM, Dustin Wenz wrote:
It sounds plausible that this is what we are running into. All of our nodes
report ...
Everything seems to have worked; the commands completed successfully and disk
... significant time without it being reported?
- .Dustin
On Jun 27, 2012, at 1:31 AM, Igor wrote:
Hello
Too much GC? Check JVM heap settings and real usage.
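Two quick ways to do that, assuming a typical install (the log path and the
process-name match below are guesses):

    # GC pauses long enough that Cassandra itself logged them:
    grep GCInspector /var/log/cassandra/system.log | tail

    # live heap occupancy and GC time, sampled every 5 seconds:
    jstat -gcutil $(pgrep -f CassandraDaemon) 5000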
On 06/27/2012 01:37 AM, Dustin Wenz wrote:
We occasionally see fairly poor compaction performance on random nodes in our
7-node cluster, and I have no idea why. This is one example from the log:
[CompactionExecutor:45] 2012-06-26 13:40:18,721 CompactionTask.java
(line 221) Compacted to ...
We observed a JRE crash on one node in a seven-node cluster about a half hour
after upgrading to version 1.1.1 yesterday. Immediately after the upgrade,
everything seemed to be working fine. The last item in the Cassandra log was an
info-level notification that compaction had started on a data ...
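For what it's worth, a HotSpot-level crash normally leaves a fatal-error log
named hs_err_pid<pid>.log in the JVM's working directory (or wherever
-XX:ErrorFile points), which is the first place to look. The search paths
below are guesses for a typical install:

    find /var/lib/cassandra /tmp -name 'hs_err_pid*.log' 2>/dev/null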