SocketException means this is coming from the network, not the sstables.
Knowing the full error message would be nice, but just about any
problem on that end should be fixed by adding connection pooling to
your client.
(moving to user@)
On Wed, Jul 14, 2010 at 5:09 AM, Thomas Downing wrote:
> On
> As a Cassandra newbie, I'm not sure how to tell, but they are all
> to *.Data.db files, and all under the DataFileDirectory (as spec'ed
> in storage-conf.xml), which is a separate directory from the
> CommitLogDirectory. I did not see any *Index.db or *Filter.db
> files, but I may have missed them.
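[For readers landing on this thread: the connection pooling fix suggested above can be very small. The sketch below is illustrative only, not code from the thread; the class and method names are made up, and the connection type is left generic so it could wrap a Thrift Cassandra.Client or any other socket-backed client.]

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.function.Supplier;

    // Fixed-size pool: callers borrow a connection, use it, and give it back.
    // Reusing a bounded set of sockets instead of opening one per request is
    // what usually makes client-side SocketExceptions under load go away.
    public class ConnectionPool<C> {
        private final BlockingQueue<C> idle;

        public ConnectionPool(int size, Supplier<C> factory) {
            idle = new ArrayBlockingQueue<>(size);
            for (int i = 0; i < size; i++) {
                idle.add(factory.get());      // eagerly open `size` connections
            }
        }

        public C borrow() throws InterruptedException {
            return idle.take();               // blocks until a connection is free
        }

        public void release(C connection) {
            idle.add(connection);
        }
    }

[Callers wrap each request in borrow()/release(), so the pool size also caps how many sockets a single client holds open against the node.]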
> More than one fd can be open on a given file, and many of the open fd's are
> on files that have been deleted. The stale fd's are all on Data.db files in
> the data directory, which I have separate from the commit log directory.
>
> I haven't had a chance to look at the code handling files, and I
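[A quick way to reproduce the observation above, open fd's pointing at already-deleted Data.db files, from inside a JVM on Linux. This is an illustrative sketch, not code from the thread, and it relies on the Linux-specific /proc/self/fd layout.]

    import java.io.File;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Walk /proc/self/fd; the kernel shows descriptors whose target file has
    // been unlinked as symlinks ending in " (deleted)".
    public class DeletedFdCheck {
        public static void main(String[] args) {
            int deleted = 0;
            File[] fds = new File("/proc/self/fd").listFiles();
            if (fds == null) {
                return;                        // not on Linux / procfs not mounted
            }
            for (File fd : fds) {
                try {
                    Path target = Files.readSymbolicLink(fd.toPath());
                    if (target.toString().endsWith("(deleted)")) {
                        deleted++;
                        System.out.println(fd.getName() + " -> " + target);
                    }
                } catch (IOException ignored) {
                    // the fd was closed between listing and readlink; skip it
                }
            }
            System.out.println("open fds on deleted files: " + deleted);
        }
    }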
On 7/13/2010 9:20 AM, Jonathan Ellis wrote:
On Tue, Jul 13, 2010 at 4:19 AM, Thomas Downing wrote:
On a related note: I am running some feasibility tests looking for
high ingest rate capabilities. While testing Cassandra the problem
I've encountered is that it runs out of file handles during compaction.
On Tue, Jul 13, 2010 at 10:26 PM, Jonathan Ellis wrote:
>
> I'm totally fine with saying "Here's a JNI library for Linux [or even
> Linux version >= 2.6.X]" since that makes up 99% of our production
> deployments, and leaving the remaining 1% with the status quo.
>
You really need to say Linux >
On Tue, Jul 13, 2010 at 6:18 AM, Terje Marthinussen wrote:
> Due to the need for doing data alignment in the application itself (you are
> bypassing all the OS magic here), there is really nothing portable about
> O_DIRECT.
I'm totally fine with saying "Here's a JNI library for Linux [or even
Linux version >= 2.6.X]" since that makes up 99% of our production
deployments, and leaving the remaining 1% with the status quo.
On Tue, Jul 13, 2010 at 4:19 AM, Thomas Downing wrote:
> On a related note: I am running some feasibility tests looking for
> high ingest rate capabilities. While testing Cassandra the problem
> I've encountered is that it runs out of file handles during compaction.
This usually just means "inc
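[If the answer really is just to raise the descriptor limit, it helps to know how close the process actually gets. A small, JVM-specific check as an illustrative sketch; com.sun.management.UnixOperatingSystemMXBean is only available on Sun/Oracle-derived JVMs on Unix.]

    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    // Print how many descriptors the JVM currently holds versus its hard limit.
    public class FdUsage {
        public static void main(String[] args) {
            Object os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
                System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
                        + " / max: " + unix.getMaxFileDescriptorCount());
            }
        }
    }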
> Due to the need for doing data alignment in the application itself (you are
> bypassing all the OS magic here), there is really nothing portable about
> O_DIRECT. Just have a look at open(2) on linux:
[snip]
> So, just within Linux you got different mechanisms for this depending on
> kernel and
> (2) posix_fadvise() feels more obscure and less portable than
> O_DIRECT, the latter being well-understood and used by e.g. databases
> for a long time.
>
Due to the need for doing data alignment in the application itself (you are
bypassing all the OS magic here), there is really nothing portable about
O_DIRECT.
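[To make the alignment point concrete: with O_DIRECT the application itself must round file offsets and transfer lengths (and, on many kernels, the buffer address) to a block boundary before issuing I/O. The sketch below only illustrates that bookkeeping; the 512-byte block size and all names are assumptions, not anything specified in the thread.]

    // Round an arbitrary requested byte range out to block boundaries, the way
    // an application using O_DIRECT has to before issuing the read, then copy
    // out just the bytes the caller asked for.
    public final class DirectIoAlignment {
        static final int BLOCK = 512;   // assumed alignment unit

        static long alignDown(long offset) { return offset - (offset % BLOCK); }
        static long alignUp(long length)   { return ((length + BLOCK - 1) / BLOCK) * BLOCK; }

        public static void main(String[] args) {
            long requestOffset = 1000037L;  // arbitrary unaligned request
            long requestLength = 4000L;
            long ioOffset = alignDown(requestOffset);
            long ioLength = alignUp(requestOffset + requestLength - ioOffset);
            System.out.println("issue a " + ioLength + "-byte read at offset " + ioOffset
                    + ", then copy out the " + requestLength + " requested bytes");
        }
    }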
> This looks relevant:
> http://chbits.blogspot.com/2010/06/lucene-and-fadvisemadvise.html (see
> comments for directions to code sample)
Thanks. That's helpful; I've been trying to avoid JNI in the past so
wasn't familiar with the API, and the main difficulty was likely to be
how to best expose t
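[For anyone wondering what exposing posix_fadvise through JNI would look like on the Java side, here is a rough sketch. All names are hypothetical and this is not the code the blog post points to; the native half would simply forward the call to posix_fadvise(2) for the given descriptor.]

    // Hypothetical Java-side declaration of a thin posix_fadvise binding.
    public final class PosixFadvise {
        // Advice constants mirroring <fcntl.h>; the values are the usual Linux ones.
        public static final int POSIX_FADV_NORMAL     = 0;
        public static final int POSIX_FADV_RANDOM     = 1;
        public static final int POSIX_FADV_SEQUENTIAL = 2;
        public static final int POSIX_FADV_WILLNEED   = 3;
        public static final int POSIX_FADV_DONTNEED   = 4;
        public static final int POSIX_FADV_NOREUSE    = 5;

        static {
            System.loadLibrary("posixfadvise");   // assumed native library name
        }

        /** Returns 0 on success, otherwise the errno from posix_fadvise(2). */
        public static native int fadvise(int fd, long offset, long length, int advice);
    }

[A compaction reader could then hint POSIX_FADV_DONTNEED on ranges it has finished with, so a one-pass scan does not evict the hot read set from the page cache.]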
On a related note: I am running some feasibility tests looking for
high ingest rate capabilities. While testing Cassandra the problem
I've encountered is that it runs out of file handles during compaction.
Until that event, there was no significant impact on throughput as
I was using it (0.1 que
This looks relevant:
http://chbits.blogspot.com/2010/06/lucene-and-fadvisemadvise.html (see
comments for directions to code sample)
On Fri, Jul 9, 2010 at 1:52 AM, Peter Schuller wrote:
>> It might be worth experimenting with posix_fadvise. I don't think
>> implementing our own i/o scheduler or
> It might be worth experimenting with posix_fadvise. I don't think
> implementing our own i/o scheduler or rate-limiter would be as good a
> use of time (it sounds like you're on that page too).
Ok. And yes I mostly agree, although I can imagine circumstances where
a pretty simple rate limiter m
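[As an illustration of the "pretty simple rate limiter" idea: a sketch under assumptions, not anything from Cassandra itself. The compaction thread would call acquire(n) before writing n bytes and block just long enough to keep the long-run write rate at or below a target.]

    // Minimal byte-rate throttle: each acquire() pushes a "next free time"
    // forward by bytes/rate and sleeps off whatever debt earlier writes built up.
    public final class SimpleRateLimiter {
        private final double bytesPerSecond;
        private long nextFreeTimeNanos = System.nanoTime();

        public SimpleRateLimiter(double bytesPerSecond) {
            this.bytesPerSecond = bytesPerSecond;
        }

        public synchronized void acquire(long bytes) throws InterruptedException {
            long now = System.nanoTime();
            if (nextFreeTimeNanos < now) {
                nextFreeTimeNanos = now;                  // idle period: carry no debt
            }
            long waitNanos = nextFreeTimeNanos - now;     // debt from earlier writes
            nextFreeTimeNanos += (long) (bytes * 1e9 / bytesPerSecond);
            if (waitNanos > 0) {
                Thread.sleep(waitNanos / 1000000L, (int) (waitNanos % 1000000L));
            }
        }
    }

[A compaction loop would call limiter.acquire(buffer.length) before each flush; setting bytesPerSecond to a fraction of the device's sequential throughput leaves headroom for reads.]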
On Wed, Jul 7, 2010 at 4:57 PM, Peter Schuller wrote:
>> This makes sense, but from what I have seen, read contention vs
>> cassandra is a much bigger deal than write contention
(meant "read contention vs compaction")
> I am not really concerned with write performance, but rather with
> writes a
This makes sense, but from what I have seen, read contention vs
cassandra is a much bigger deal than write contention (unless you
don't have a separate device for your commitlog, but optimizing for
that case isn't one of our goals).
On Wed, Jul 7, 2010 at 12:09 PM, Peter Schuller wrote:
> Hello,
help in controlling the performance impact of the compaction operation.
-Rishi
From: Peter Schuller
To: dev@cassandra.apache.org
Sent: Wed, July 7, 2010 10:09:25 AM
Subject: Minimizing the impact of compaction on latency and throughput
Hello,
I have repeatedly
Hello,
I have repeatedly seen users report that background compaction is
overly detrimental to the behavior of the node with respect to
latency. While I have not yet deployed cassandra in a production
situation where latencies are closely monitored, these reports do not
really sound very surprising