Sent: Friday, December 04, 2009 9:14 PM
To: cassandra-user@incubator.apache.org
Subject: Re: Persistently increasing read latency
On Fri, Dec 4, 2009 at 10:40 PM, Thorsten von Eicken t...@rightscale.com
wrote:
For the first few hours of my load test, I have enough I/O. The problem
is that Cassandra is spending too much I/O on reads and writes and too
little on compactions to function well in the long term.
Subject: Re: Persistently increasing read latency
Thanks for looking into this. Doesn't seem like there's much
low-hanging fruit to make compaction faster but I'll keep that in the
back of my mind.
-Jonathan
On Thu, Dec 3, 2009 at 4:58 PM, Freeman, Tim tim.free...@hp.com wrote:
So
Sent: Thursday, December 03, 2009 2:45 PM
To: cassandra-user@incubator.apache.org
Subject: Re: Persistently increasing read latency
On Thu, Dec 3, 2009 at 4:34 PM, Freeman, Tim tim.free...@hp.com wrote:
Can you tell if the system is i/o or cpu bound during compaction?
It's I/O bound
On Fri, Dec 4, 2009 at 1:31 PM, Freeman, Tim tim.free...@hp.com wrote:
Fundamentally there's only so much I/O you can do at a time. If you
don't have enough, you need to upgrade to servers with better i/o
(i.e. not EC2: http://pl.atyp.us/wordpress/?p=2240&cpage=1) and/or
more ram to cache the
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so   bi   bo   in   cs us sy id wa st
 5  1  29576  88400  27656  137977841046 13401 8 1 91 1 0
performance is definitely better now that caching
Jonathan Ellis wrote:
On Fri, Dec 4, 2009 at 1:31 PM, Freeman, Tim tim.free...@hp.com wrote:
Fundamentally there's only so much I/O you can do at a time. If you
don't have enough, you need to upgrade to servers with better i/o
(i.e. not EC2: http://pl.atyp.us/wordpress/?p=2240&cpage=1)
On Fri, Dec 4, 2009 at 10:40 PM, Thorsten von Eicken t...@rightscale.com
wrote:
For the first few hours of my load test, I have enough I/O. The problem
is that Cassandra is spending too much I/O on reads and writes and too
little on compactions to function well in the long term.
If you
Subject: Re: Persistently increasing read latency
1) use jconsole to see what is happening to jvm / cassandra internals.
possibly you are slowly exceeding cassandra's ability to keep up with
writes, causing the jvm to spend more and more effort GCing to find
enough memory to keep going
2) you should be at least on 0.4.2
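A minimal sketch of doing #1 from code instead of clicking through the
jconsole GUI -- the java.lang GarbageCollector MBeans are standard platform
MBeans, but localhost:8080 as the JMX endpoint is an assumption (the
0.4-era default), and any Cassandra-specific MBean names are best confirmed
in jconsole's MBeans tab:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class GcWatcher {
    public static void main(String[] args) throws Exception {
        // Same endpoint jconsole attaches to; 8080 is the assumed JMX port.
        String url = "service:jmx:rmi:///jndi/rmi://localhost:8080/jmxrmi";
        JMXConnector jmxc = JMXConnectorFactory.connect(new JMXServiceURL(url));
        MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
        ObjectName pattern = new ObjectName("java.lang:type=GarbageCollector,name=*");
        while (true) {
            // Cumulative collection count/time per collector; if these grow
            // faster and faster while throughput drops, GC is the problem.
            Set<ObjectName> gcs = mbs.queryNames(pattern, null);
            for (ObjectName gc : gcs) {
                System.out.printf("%s: count=%s timeMs=%s%n",
                        gc.getKeyProperty("name"),
                        mbs.getAttribute(gc, "CollectionCount"),
                        mbs.getAttribute(gc, "CollectionTime"));
            }
            Thread.sleep(10000);
        }
    }
}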
i am seeing this as well. i did a test with just 1 cassandra node,
ReplicationFactor=1, 'get' ConsistencyLevel.ONE, 'put'
ConsistencyLevel.QUORUM. The first test was writing and reading random
values starting from a fresh database. The put performance is staying
reasonable, but the read
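For reference, the shape of such a test in code -- a minimal sketch and a
reconstruction, not the actual harness; Store is a hypothetical stand-in
for the Thrift client calls (insert at QUORUM, get at ONE):

import java.util.Random;

// Hypothetical stand-in for the Thrift client: put = insert at
// ConsistencyLevel.QUORUM, get = read at ConsistencyLevel.ONE.
interface Store {
    void put(String key, byte[] value);
    byte[] get(String key);
}

public class LatencyDrift {
    public static void run(Store store, int ops) {
        Random rnd = new Random(42);
        byte[] value = new byte[1024];
        long readNanos = 0;
        int reads = 0;
        for (int i = 0; i < ops; i++) {
            String key = "k" + rnd.nextInt(ops);  // random keys, fresh database
            rnd.nextBytes(value);
            store.put(key, value);
            long t0 = System.nanoTime();
            store.get(key);
            readNanos += System.nanoTime() - t0;
            if (++reads == 10000) {               // report every 10k reads;
                System.out.printf("avg read latency: %.3f ms%n",
                        readNanos / reads / 1e6); // watch this climb over hours
                readNanos = 0;
                reads = 0;
            }
        }
    }
}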
Sent: Thursday, December 03, 2009 11:02 AM
To: cassandra-user@incubator.apache.org
Subject: Re: Persistently increasing read latency
i would expect read latency to increase linearly w/ the number of
sstables you have around. how many are in your data directories? is
your compaction lagging 1000s of tables behind again?
On Thu, Dec 3, 2009 at 1:04 PM, Chris Goffinet goffi...@digg.com wrote:
We've seen reads spike like this with a large number of SSTables.
(Which I emphasize is a natural consequence of the SSTable design.)
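A toy model of why that happens (a sketch, not Cassandra's actual read
path): without compaction, every flush adds another immutable table that a
read may have to probe, so worst-case probes grow linearly with the count:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReadAmplification {
    // Each flush produces one immutable "sstable"; newest kept first.
    static final List<Map<String, String>> sstables = new ArrayList<>();

    static void flush(Map<String, String> memtable) {
        sstables.add(0, new HashMap<>(memtable));
    }

    // A read probes tables newest-to-oldest until it finds the key.
    static String read(String key) {
        int probes = 0;
        for (Map<String, String> t : sstables) {
            probes++;
            if (t.containsKey(key)) {
                System.out.println(probes + " sstables probed");
                return t.get(key);
            }
        }
        System.out.println(probes + " sstables probed (miss)");
        return null;
    }

    // Compaction merges everything into one table, restoring cheap reads.
    static void compact() {
        Map<String, String> merged = new HashMap<>();
        for (int i = sstables.size() - 1; i >= 0; i--)
            merged.putAll(sstables.get(i)); // oldest first, so newer wins
        sstables.clear();
        sstables.add(merged);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            Map<String, String> mem = new HashMap<>();
            mem.put("k" + i, "v" + i);
            flush(mem);
        }
        read("k0");   // probes all 1000 tables
        compact();
        read("k0");   // probes 1
    }
}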
On Thu, Dec 3, 2009 at 1:11 PM, Freeman, Tim tim.free...@hp.com wrote:
how many are in your data directories? is your compaction
lagging 1000s of tables behind again?
Yes, there are 2348 files in data/Keyspace1, and jconsole says the compaction
pool has 1600 pending tasks.
If you stop doing
Sent: Thursday, December 03, 2009 11:02 AM
To: cassandra-user@incubator.apache.org
Subject: Re: Persistently increasing read latency
i would expect read latency to increase linearly w/ the number of
sstables you have around. how many are in your data directories? is
your compaction lagging 1000s of tables behind again?
i do not have any pending tasks in the compaction pool but i have 1164
files in my data directory. one thing to note about my situation is
that i did run out of disk space during my test. cassandra _seemed_ to
recover nicely.
tim, is yours recovering? i plan to rerun the test tonight with a
On Thu, Dec 3, 2009 at 1:34 PM, Jonathan Ellis jbel...@gmail.com wrote:
On Thu, Dec 3, 2009 at 1:32 PM, B. Todd Burruss bburr...@real.com wrote:
i do not have any pending tasks in the compaction pool but i have 1164
files in my data directory.
how many CFs are those spread across?
... and
files with Data in their name = 384
files with Compacted in their name = 11
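A throwaway counter for reproducing numbers like these -- the
data/Keyspace1 path and the Data/Compacted name components are assumptions
here; point it at whatever your DataFileDirectory is:

import java.io.File;

public class CountSSTables {
    public static void main(String[] args) {
        // sstable components carry "Data" / "Compacted" in their file
        // names (e.g. bucket-42-Data.db, bucket-42-Compacted).
        File dir = new File(args.length > 0 ? args[0] : "data/Keyspace1");
        File[] files = dir.listFiles();
        if (files == null) {
            System.err.println("not a directory: " + dir);
            return;
        }
        int data = 0, compacted = 0;
        for (File f : files) {
            if (f.getName().contains("Data")) data++;
            if (f.getName().contains("Compacted")) compacted++;
        }
        System.out.println(files.length + " files total, " + data
                + " with Data, " + compacted + " with Compacted");
    }
}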
one CF, my keyspace is like this:
<Keyspaces>
  <Keyspace Name="uds">
    <KeysCachedFraction>0.01</KeysCachedFraction>
    <ColumnFamily CompareWith="BytesType" Name="bucket" />
  </Keyspace>
</Keyspaces>
i've also used
nothing - i have DEBUG level set and bounced the server. i'll restart
again.
cassandra spends a lot of time loading INDEXes
On Thu, 2009-12-03 at 14:08 -0600, Jonathan Ellis wrote:
what does it log when you nodeprobe compact, with debug logging on?
On Thu, Dec 3, 2009 at 1:59 PM, B. Todd Burruss wrote:
hmm.
doesn't that leave the trunk in a bad position in terms of new development?
you may go through times when a major feature lands and trunk is broken/buggy.
or are you planning on building new features on a branch and then merging into
trunk when it's stable?
On Dec 3, 2009, at 5:32 AM,
I'm only reporting what trunk is like right now, not what it will be
in the future. Trunk has been buggy before and will be again, don't
worry. :)
On Wed, Dec 2, 2009 at 2:57 PM, Ian Holsman i...@holsman.net wrote:
hmm.
doesn't that leave the trunk in a bad position in terms of new development?
2) you should be at least on 0.4.2 and preferably trunk if