> Of course. I wasn't thinking clearly.
>
> So, back to a previous point you brought up, I will have heavy reads and
> even heavier writes. How would you rate the benefits of flashcache in
> such a scenario? Is it still an overall performance boost worth the
> expense?
We also have heavy reads and even heavier writes, and flashcache has
still been a net win for us.
On 7/18/2011 1:20 PM, Héctor Izquierdo Seliva wrote:
>
> If using the version that has both rt and wt caches, is it just the wt
> cache that's polluted for compactions/flushes? If not, why does the rt
> cache also get polluted?
>
As I said, all reads go through flashcache, so if you read three 10 GB
sstables for a compaction you will get those 30 GB pushed through the
cache, evicting whatever hot data was in there.
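A rough way to see why that hurts: stream more unique blocks through a
cache than it can hold and nothing hot survives. A minimal sketch with
made-up sizes (4 MB blocks, a 16 GB cache device) and a single global
LRU, which real flashcache is not (it is set-associative):

# Toy LRU model; purely an illustration of the pollution effect.
from collections import OrderedDict

BLOCK = 4 * 1024 * 1024                   # pretend 4 MB cache blocks
CACHE_BLOCKS = (16 * 1024**3) // BLOCK    # a 16 GB ssd cache

cache = OrderedDict()

def touch(block_id):
    """Reference a block, evicting the least recently used on a miss."""
    hit = block_id in cache
    if hit:
        cache.move_to_end(block_id)
    else:
        if len(cache) >= CACHE_BLOCKS:
            cache.popitem(last=False)     # evict LRU
        cache[block_id] = True
    return hit

# Warm the cache with a "hot" read working set.
hot = [("hot", i) for i in range(CACHE_BLOCKS // 2)]
for b in hot:
    touch(b)

# A compaction reads three 10 GB sstables exactly once each.
compaction_blocks = (3 * 10 * 1024**3) // BLOCK
for i in range(compaction_blocks):
    touch(("sstable", i))

survivors = sum(b in cache for b in hot)
print(f"hot blocks still cached: {survivors}/{len(hot)}")

With 30 GB streamed through a 16 GB cache, the answer is 0: the hot set
is gone entirely, even though the compaction will never read those
sstable blocks again.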
On 7/18/2011 12:08 PM, Héctor Izquierdo Seliva wrote:
> Interesting. So, there is no segregation between read and write cache
> space? A compaction or flush can evict blocks in the read cache if it
> needs the space for write buffering?
There are two versions: the -wt (write through) one, which also caches
what is written, and the normal version, which only caches reads.
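A toy model of the difference, assuming (as above) that the non-wt
build caches reads only. This is just an illustration of the pollution
behaviour described in this thread, not flashcache internals:

class ToyCache:
    def __init__(self, capacity, cache_writes):
        self.capacity = capacity
        self.cache_writes = cache_writes
        self.blocks = []                  # front = oldest

    def _insert(self, block_id):
        if block_id in self.blocks:
            return
        if len(self.blocks) >= self.capacity:
            self.blocks.pop(0)            # evict oldest
        self.blocks.append(block_id)

    def read(self, block_id):
        self._insert(block_id)            # all reads populate the cache

    def write(self, block_id):
        if self.cache_writes:             # only -wt inserts on writes
            self._insert(block_id)

for mode, cache_writes in [("flashcache-wt", True), ("read-only caching", False)]:
    c = ToyCache(capacity=100, cache_writes=cache_writes)
    for i in range(100):
        c.read(("hot", i))                # hot read set fills the cache
    for i in range(100):
        c.write(("flush", i))             # a memtable flush writes new blocks
    hot_left = sum(1 for b in c.blocks if b[0] == "hot")
    print(f"{mode}: {hot_left}/100 hot blocks left after the flush")

Running it prints 0/100 hot blocks left for -wt and 100/100 for
read-only caching: the flush itself evicts the read working set only in
write-through mode.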
On 7/18/2011 4:14 AM, Héctor Izquierdo Seliva wrote:
>
> Hector, some before/after numbers would be great if you can find them.
> Thanks!
>
I'll try and get some for you :)
> What happens when your cache gets trashed? Do compactions and flushes
> go slower?
>
If you use flashcache-wt, flushed and compacted sstables will go to the
cache.
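If you want to gather before/after numbers yourself, one low-effort
option is to diff flashcache's /proc counters around a compaction. A
sketch, with the caveat that the proc path and the name=value field
format are assumptions that differ between flashcache versions (check
ls /proc/flashcache* on your node):

import re
import time

STATS_PATH = "/proc/flashcache/cachedev/flashcache_stats"  # placeholder

def snapshot():
    """Parse 'name=value' counters; names may contain spaces."""
    with open(STATS_PATH) as f:
        text = f.read()
    return {k.strip(): int(v) for k, v in re.findall(r"([a-z_ ]+)=(\d+)", text)}

before = snapshot()
time.sleep(600)              # trigger or wait out a major compaction here
after = snapshot()

for name in sorted(before):
    delta = after.get(name, 0) - before[name]
    if delta:
        print(f"{name}: +{delta}")

Watching the read-hit counters fall during the compaction and recover
afterwards makes the trashing visible without any load-generator work.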
On 7/17/2011 12:29 PM, Héctor Izquierdo Seliva wrote:
I've been using flashcache for a while in production. It improves read
performance, and latency was roughly halved, though I don't remember
the exact numbers.
Problems: compactions will trash your cache, and so will memtable
flushes. Right now there's no way to avoid that.
If you want, I can try to dig up some numbers.
On 7/12/2011 9:02 PM, Peter Schuller wrote:
> Thanks Peter, but... hmmm, are you saying that even after a cache miss which
> results in a disk read and blocks being moved to the ssd, by the next
> cache miss for the same data and file blocks the ssd is unlikely to
> still have those same blocks present?
I am saying that it depends on how much other data has flowed through
the cache in between: with compactions and flushes streaming tens of
gigabytes through it, those blocks may well have been evicted again
before you come back for them.
On 7/12/2011 10:19 AM, Peter Schuller wrote:
> Do any Cass developers have any thoughts on this and whether or not it would
> be helpful considering Cass' architecture and operation?
A well-functioning L2 cache should definitely be very useful with
Cassandra for read-intensive workloads where the request distribution
is such that the additional cache capacity meaningfully raises the hit
ratio.
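To put a rough number on that: with a skewed, Zipf-like request
distribution, a cache holding a few percent of the data can still
absorb most reads. A back-of-the-envelope sketch where the 1 TB data
set, 4 KB rows, and Zipf exponent s=1.0 are all assumed for
illustration:

import math

def harmonic(n):
    """H(n) approximation for large n (Euler-Mascheroni constant)."""
    return math.log(n) + 0.5772156649

N = (1 * 1024**4) // 4096            # ~268M 4 KB items in 1 TB

for cache_gb in (32, 160):           # e.g. RAM-sized vs ssd-sized cache
    k = (cache_gb * 1024**3) // 4096
    hit = harmonic(k) / harmonic(N)  # Zipf(s=1): P(rank <= k) = H(k)/H(N)
    print(f"{cache_gb:>4} GB cache (~{100*k/N:.1f}% of data): "
          f"~{100*hit:.0f}% of requests served from cache")

Under those assumptions a 32 GB cache (about 3% of the data) already
catches roughly 80% of requests, and a larger ssd cache pushes that
above 90%; with a flatter access distribution the numbers fall off
quickly, which is exactly the caveat above.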
With big data requirements pressuring me to pack up to a terabyte on one
node, I suspect that even 32 GB of RAM just will not be large enough for
Cass' various memory caches to be effective. 32/1000 is a tiny working
set to data store ratio... even assuming non-random reads. So, I'm
investigating whether an ssd-backed cache layer like flashcache could
bridge the gap.