Why do you need another CF? Is there something wrong with repeating the key
as a column and indexing it?
On Fri, Jul 22, 2011 at 7:40 PM, Patrick Julien pjul...@gmail.com wrote:
Exactly. In any case, I just answered my own question. If I need
range queries, I can just make another column family.
Hi, please help me with my problem. For better performance I turned off
compaction and ran massive inserts. After the database reached 37GB I stopped
the inserts and started a compaction with nodetool compact Keyspace CFamily.
After half an hour of work Cassandra fell over with an Out of Memory error. I
gave it 1500M of heap.
Remember the CLI uses microsecond precision, so if your app is not using
the same precision you will get weird results, with the client that writes
the biggest timestamp winning the final value.
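To make the precision mismatch concrete, here is a small illustrative sketch (plain Python, not Cassandra code) of how a microsecond-precision client always out-bids a millisecond-precision one under last-write-wins, because raw timestamp integers are compared:

```python
import time

# Last-write-wins compares raw timestamp integers. A client using
# microseconds produces numbers ~1000x larger than one using
# milliseconds, so it "wins" regardless of real wall-clock order.
ms_client = int(time.time() * 1_000)       # millisecond precision
us_client = int(time.time() * 1_000_000)   # microsecond precision

# Even a microsecond-client write made a full minute *earlier*
# still carries a bigger number than the millisecond write:
earlier_us = us_client - 60 * 1_000_000
assert earlier_us > ms_client
```

The point is that mixing precisions silently makes one client's writes unconditionally dominate the other's.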
On Saturday, July 23, 2011, Jonathan Ellis jbel...@gmail.com wrote:
You must have given it a delete timestamp in
On Sunday, July 24, 2011, lebron james lebron.m...@gmail.com wrote:
From experience with similar-sized data sets, 1.5GB may be too little.
Recently I bumped our Java heap limit from 3GB to 4GB to get past an OOM
during a major compaction.
Check nodetool -h localhost info while the compaction is running for a simple
view into the memory state.
If you can,
Jonathan,
Are you sure that the reads done for compaction are sequential with Cassandra
0.6.13? This is not what I am observing right now. During a minor compaction
I usually observe ~ 1500 to 1900 r/s while rMB/s is barely around 30 to 35MB/s.
Just asking out of curiosity.
FR
Do I need to install Tomcat? Maybe that is the problem...
On Sat, Jul 23, 2011 at 12:03 PM, Jean-Nicolas Boulay Desjardins
jnbdzjn...@gmail.com wrote:
What is it called in the ps output?
Because I have a strong feeling that it tries to load... but for some
reason that is beyond me, it does not.
Is
It's sequential per-sstable. If you are compacting a lot of sstables,
how closely this approximates completely sequential I/O will
deteriorate.
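The shape of the problem can be illustrated with a toy k-way merge (plain Python, not Cassandra internals): each input is consumed strictly in order, but reads interleave across inputs, which is why many inputs degrade toward random I/O on spinning disks.

```python
import heapq

# Each "sstable" is a sorted list of (row_key, value) pairs.
sstable_a = [("a", 1), ("c", 3), ("e", 5)]
sstable_b = [("b", 2), ("d", 4), ("f", 6)]
sstable_c = [("a", 0), ("f", 7)]

# heapq.merge reads every input sequentially, but alternates
# between inputs to produce globally sorted output -- the more
# inputs, the more the disk head jumps between files.
merged = list(heapq.merge(sstable_a, sstable_b, sstable_c))
```

With two or three inputs the access pattern is close to sequential; with dozens it is effectively random seeks between files.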
On Sun, Jul 24, 2011 at 1:18 PM, Francois Richard frich...@xobni.com wrote:
Restarting the service will drop all the memory-mapped caches; Cassandra's
caches are saved / persistent, and you can also use memcached if you want.
Are you experiencing stop-the-world pauses? There are some things that can be
done to reduce the chance of them happening.
Cheers
-
Quick reminder: with RF == 2 the QUORUM is 2 as well. So when using
LOCAL_QUORUM with RF 2+2 you will effectively be using LOCAL_ALL, which may
not be what you want. As De La Soul sang, 3 is the magic number for minimum
fault tolerance (QUORUM is then 2).
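The arithmetic is easy to check with the quorum formula, floor(RF / 2) + 1, which is how Cassandra sizes a quorum:

```python
# A quorum is floor(RF / 2) + 1 replicas.
def quorum(rf: int) -> int:
    return rf // 2 + 1

# RF=2: quorum is 2, i.e. every replica must respond --
# LOCAL_QUORUM degenerates to LOCAL_ALL, so one down node
# fails the operation.
print(quorum(2))  # 2
# RF=3: quorum is still 2, so one replica can be lost.
print(quorum(3))  # 2
```

This is why RF=3 is the smallest replication factor that gives quorum operations any fault tolerance.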
Cheers
-
Aaron
My fall-back approach is, since A and B do not change a lot, to
pre-generate the join of A and B (not very large) keyed on A.id +
B.id, then do the get(a+b).
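That approach can be sketched in a few lines of plain Python (the table contents and the ":" key separator are made up for illustration):

```python
# Hypothetical sketch: materialise the join of two small, rarely
# changing tables A and B, keyed on the composite "A.id:B.id",
# so the read path becomes a single key lookup.
a_rows = {"a1": {"name": "alpha"}, "a2": {"name": "beta"}}
b_rows = {"b1": {"size": 10}, "b2": {"size": 20}}

# Pre-generate the join; re-run whenever A or B changes.
joined = {
    f"{a_id}:{b_id}": {**a_cols, **b_cols}
    for a_id, a_cols in a_rows.items()
    for b_id, b_cols in b_rows.items()
}

# Read side: one get() instead of a join at query time.
row = joined["a1:b2"]  # {"name": "alpha", "size": 20}
```

The space cost is |A| x |B| rows, which is the trade the reply below endorses: space for time.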
+1 for materialising views / joins you know you want ahead of time. Trade
space for time.
Cheers
-
Aaron Morton
Hi all,
I'm launching a Cassandra cluster with 30 nodes. I wonder whether
inconsistent host clocks will affect the performance of the cluster.
Thanks!
Hello,
We are running Cassandra 0.7.5 on a 15 node cluster, RF=3. We are having a
problem where some commit logs do not get deleted. Our write load generates
a new commit log about every two to three minutes. On average, one commit
log an hour is not deleted. Without draining, deleting the
You should always sync your host clocks. Clients provide the timestamps, but
on the server side gc_grace and TTL'd columns can have issues if the server
clocks are not correct.
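As a toy illustration of the TTL side (plain Python, not Cassandra internals; the function and numbers are made up), expiry is effectively a comparison between a stored deadline and the server's local clock, so clock skew shifts when data disappears:

```python
# Hypothetical sketch: a TTL'd column is considered dead once the
# server's local clock passes write_time + ttl. A skewed server
# clock therefore expires data too early or keeps it too long.
def is_expired(write_time: float, ttl: int, server_now: float) -> bool:
    return server_now >= write_time + ttl

write_time = 1_000_000.0   # arbitrary epoch seconds
ttl = 3600                 # one hour

# A server running an hour fast expires the column almost
# immediately after the write:
assert is_expired(write_time, ttl, server_now=write_time + 3700)
# A correctly synced server keeps it alive:
assert not is_expired(write_time, ttl, server_now=write_time + 100)
```

The same reasoning applies to gc_grace: tombstone collection is timed against server clocks, so skewed nodes can collect tombstones at the wrong moment.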
On Sunday, July 24, 2011, 魏金仙 sei_...@126.com wrote: