- It remains 0 with MEMTable and after flushing the MEMTable.
2. The column family is configured to run with row-cache and key-cache,
and although I am reading the same row over and over, the row-cache
size/requests remain 0. The key-cache size/requests attributes do
change.
Why
At the Cassandra 2013 conference, Axel Liljencrantz from Spotify discussed
various Cassandra gotchas in his talk "How Not to Use Cassandra." One of the
sections of his talk was on the row cache. If you weren't at the talk, or don't
remember it, the video is up on YouTube.
Here's my understanding of things ... (this applies only for the regular heap
implementation of row cache)
> Why does Cassandra not cache a row that was requested a few times?
What does the cache capacity read? Is it > 0?
> What does the ReadCount attribute in ColumnFamilies indicate?
Hi,
The row cache capacity > 0.
After reading a row, the Caches..KeyCache.Requests attribute
gets incremented, but the ColumnFamilies...ReadCount attribute
remains zero, and the Caches..RowCache.Size and Requests
attributes remain zero as well.
It looks like the row-cache is disabled, although
> Hi,
>
> The row cache capacity > 0.
>
>
> After reading a row, the Caches..KeyCache.Requests attribute
> gets incremented, but the ColumnFamilies...ReadCount attribute
> remains zero, and the Caches..RowCache.Size and Requests
> attributes remain zero as well.
>
Hi,
The JConsole shows that the capacity > 0.
Thanks
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Row-cache-tp6532887p6549420.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at
Nabble.com.
Hi,
I am running a 15-node cluster, version 0.6.8, Linux 64-bit OS, using
mmap I/O, 6GB RAM allocated. I have row cache enabled to 8 keys
(mean row size is 2KB). I am observing some strange behaviour: I query for
1.6 million rows across the cluster and the time taken is around 40 mins; I
query the
On Wed, Aug 14, 2013 at 10:56 PM, Faraaz Sareshwala <
fsareshw...@quantcast.com> wrote:
>
>- All writes invalidate the entire row (updates throw out the cached
>row)
>
> This is not correct. Writes are added to the row, if it is in the row
cache. If it's not in
If you are using off-heap memory for row cache, "all writes invalidate the
entire row" should be correct.
Boris
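To make the distinction in this exchange concrete, here is a minimal Python sketch, illustrative only and not Cassandra's actual classes: an on-heap cache holds live row objects and can merge a write into the cached row, while a serializing (off-heap) cache holds opaque blobs that cannot be patched in place, so a write can only drop the cached row.

```python
# Illustrative sketch only (not Cassandra source): write behaviour of an
# on-heap row cache vs. a serializing off-heap row cache.

class OnHeapRowCache:
    """Holds live row objects, so a write can be applied in place."""
    def __init__(self):
        self.rows = {}                       # key -> {column: value}

    def write(self, key, column, value):
        if key in self.rows:                 # row cached: update it in place
            self.rows[key][column] = value

class SerializingRowCache:
    """Holds rows as opaque serialized blobs; a blob cannot be patched,
    so any write to the row drops the cached copy."""
    def __init__(self):
        self.blobs = {}                      # key -> bytes (serialized row)

    def write(self, key, column, value):
        self.blobs.pop(key, None)            # whole row invalidated on write
```

Under this model, a write-heavy workload against the off-heap cache keeps evicting rows that then have to be rebuilt from disk on the next read, which is the behaviour the thread is debating.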
On Fri, Aug 23, 2013 at 8:32 AM, Robert Coli wrote:
> On Wed, Aug 14, 2013 at 10:56 PM, Faraaz Sareshwala <
> fsareshw...@quantcast.com> wrote:
>
>>
After a bit of searching, I think I've found the answer I've been looking for.
I guess I didn't search hard enough before sending out this email. Thank you
all for the responses.
According to the datastax documentation [1], there are two types of row cache
providers:
row
On Thu, Aug 22, 2013 at 7:53 PM, Faraaz Sareshwala <
fsareshw...@quantcast.com> wrote:
> According to the datastax documentation [1], there are two types of row
> cache providers:
>
...
> The off-heap row cache provider does indeed invalidate rows. We're going
&
data or CQL tables whose compound keys create wide rows under the
hood.
Bill
On 2013/08/23 17:30, Robert Coli wrote:
On Thu, Aug 22, 2013 at 7:53 PM, Faraaz Sareshwala
mailto:fsareshw...@quantcast.com>> wrote:
According to the datastax documentation [1], there are two types of
row cac
It is my understanding that the row cache is in memory (not on disk). It could
live on the heap or in native memory depending on the cache provider. Is that right?
-SC
> Date: Fri, 23 Aug 2013 18:58:07 +0100
> From: b...@dehora.net
> To: user@cassandra.apache.org
> Subject: Re: row
Yes, that is correct.
The SerializingCacheProvider stores row cache contents off heap. I believe you
need JNA enabled for this though. Someone please correct me if I am wrong here.
The ConcurrentLinkedHashCacheProvider stores row cache contents on the java heap
itself.
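For reference, the provider was selected in cassandra.yaml in that era (Cassandra 1.1/1.2); the key names below are from those versions, so check your own version's yaml before copying anything:

```yaml
# cassandra.yaml (Cassandra 1.1/1.2 era)
row_cache_size_in_mb: 512          # 0 disables the row cache entirely
# Off-heap provider (the 1.1+ default); serializes rows and needs JNA:
row_cache_provider: SerializingCacheProvider
# On-heap alternative; stores live row objects on the JVM heap:
# row_cache_provider: ConcurrentLinkedHashCacheProvider
```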
Each cache provider has
On 09/01/2013 03:06 PM, Faraaz Sareshwala wrote:
Yes, that is correct.
The SerializingCacheProvider stores row cache contents off heap. I believe you
need JNA enabled for this though. Someone please correct me if I am wrong here.
The ConcurrentLinkedHashCacheProvider stores row cache contents
Thank you all for your valuable comments and information.
-SC
> Date: Tue, 3 Sep 2013 12:01:59 -0400
> From: chris.burrou...@gmail.com
> To: user@cassandra.apache.org
> CC: fsareshw...@quantcast.com
> Subject: Re: row cache
>
> On 09/01/2013 03:06 PM, Faraaz Sareshwala wr
I have found row cache to be more trouble than benefit.
The term "fool's gold" comes to mind.
Using key cache and leaving more free main memory seems stable and does not
have as many complications.
On Wednesday, September 4, 2013, S C wrote:
> Thank you all for your valuable comments and informat
I agree. We've had similar experience.
Sent from my iPhone
On Sep 7, 2013, at 6:05 PM, Edward Capriolo wrote:
> I have found row cache to be more trouble than benefit.
>
> The term "fool's gold" comes to mind.
>
> Using key cache and leaving more free main memory seems stable
Hi,
I'm new to Cassandra and trying to get a better understanding of how the
row cache can be tuned to optimize performance.
I came across this article:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsConfiguringCaches.html
And it suggests not to even touch
I saw in the Riptano "Tuning Cassandra" slide deck that the row cache can be
detrimental if there are a lot of updates to the cached row. Is this because
the cache is not write through, and every update necessitates creation of a
new row?
I see there is an open issue:
https://issues.
Hello.
I recently was having some timeout issues while updating counters and turned on
row cache for that particular CF. This is its stats:
Column Family: UserQuotas
SSTable count: 3
Space used (live): 2687239
Space used (total
Hi,
We have some issues achieving a high read throughput. I wanted to alleviate
things by turning the row cache ON.
I set the row cache to 200 on one node and enabled caching 'ALL' on the 3
most-read CFs. Here is the effect this operation had on my JVM:
http://img692.imageshack.us/i
Hi all - or rather devs
we have been working on an alternative implementation to the existing row
cache(s)
We have 2 main goals:
- Decrease memory -> get more rows in the cache without suffering a huge
performance penalty
- Reduce gc pressure
This sounds a lot like we should be using the
The way I understand row caches, each node has an
independent cache, in that nodes do not share their cache contents with other
nodes. If that is the case, is it also true that when a new node is added to
the cluster it has to build up its own cache? If that's the case, I see that
as a po
Hi,
we have a couple of use cases with wide rows with a small portion of hot data
in them.
Example:
Chatlog:
{
$userid1-$userid2: [{timestamp: message}, {timestamp: message} ...]
}
People tend to check only the most recent pages. So while the current row cache
doesn't work
does the cache size change between 2nd and 3rd time?
On Thu, Jan 13, 2011 at 10:47 AM, Saket Joshi wrote:
> Hi,
>
> I am running a 15 node cluster ,version 0.6.8, Linux 64bit OS, using mmap
> I/O, 6GB ram allocated. I have row cache enabled to 8 keys (mean row
> size is 2KB).
Yes it does change.
-Original Message-
From: Jonathan Ellis [mailto:jbel...@gmail.com]
Sent: Thursday, January 13, 2011 11:01 AM
To: user
Subject: Re: cassandra row cache
does the cache size change between 2nd and 3rd time?
On Thu, Jan 13, 2011 at 10:47 AM, Saket Joshi
wrote:
>
I'm not sure if this is entirely true, but I *think* older versions of
Cassandra used a version of the ConcurrentLinkedHashMap (which backs
the row cache) that used the second-chance algorithm, rather than LRU,
which might explain this non-LRU-like behavior. I may be entirely
wrong about
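For what it's worth, second-chance eviction is easy to sketch. The toy Python class below is my own illustration (not ConcurrentLinkedHashMap's code) of why the policy is only approximately LRU: a get() merely marks the entry instead of moving it, and the mark buys the entry exactly one extra pass at eviction time.

```python
from collections import OrderedDict

class SecondChanceCache:
    """Toy second-chance (CLOCK-style) eviction. Unlike strict LRU, get()
    only sets a referenced bit; the entry is not reordered."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()          # key -> referenced bit

    def get(self, key):
        if key in self.entries:
            self.entries[key] = True          # mark referenced, do not reorder
            return True
        return False

    def put(self, key):
        while len(self.entries) >= self.capacity:
            victim, referenced = next(iter(self.entries.items()))
            del self.entries[victim]
            if referenced:                    # second chance: clear the bit and
                self.entries[victim] = False  # move to the back; keep scanning
        self.entries[key] = False
```

Because a referenced entry is merely pushed to the back once, a row that was read "a few times" can still be evicted before a never-read but more recently inserted row, which matches the non-LRU-like behavior described above.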
On 01/13/2011 02:05 PM, Saket Joshi wrote:
> Yes it does change.
>
So the confusing part for me is why a cache of size 80,000 would not be
filled after 1,600,000 requests. Can you observe the items cached and the hit
rate while making the first 1.6-million-row query?
The cache is 800,000 per node, and I have 15 nodes in the cluster. I see the
cache value increased after the first run; the row cache hit rate was 0 for the
first run. For the second run of the same data, the hit rate increased to 30%, but
on the third it jumps to 99%.
-Saket
-Original Message
, Saket Joshi wrote:
> The cache is 800,000 per node , I have 15 nodes in the cluster. I see the
> cache value increased after the first run, the row cache hit rate was 0 for
> first run. For second run of the same data , the hit rate increased to 30%
> but on the third it
On Thu, Jan 13, 2011 at 2:00 PM, Edward Capriolo wrote:
> Is it possible that you are reading at READ.ONE and that READ.ONE
> only warms cache on 1 of your three nodes= 20. 2nd read warms another
> 60%, and by the third read all the replicas are warm? 99% ?
>
> This would be true if digest reads
Digest reads could be being dropped..?
On Thu, Jan 13, 2011 at 4:11 PM, Jonathan Ellis wrote:
> On Thu, Jan 13, 2011 at 2:00 PM, Edward Capriolo
> wrote:
> > Is it possible that you are reading at READ.ONE and that READ.ONE
> > only warms cache on 1 of your three nodes= 20. 2nd read warms anot
That's possible, yes. He'd want to make sure there aren't any of
those WARN messages in the logs.
On Fri, Jan 14, 2011 at 11:46 AM, Mike Malone wrote:
> Digest reads could be being dropped..?
>
> On Thu, Jan 13, 2011 at 4:11 PM, Jonathan Ellis wrote:
>>
>> On Thu, Jan 13, 2011 at 2:00 PM, Edwar
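Edward's hypothesis is easy to model. The toy Monte Carlo below is my own sketch (the replica-choice and warming behavior are assumptions, not Cassandra code): each run reads the same rows, picks one of RF=3 replicas uniformly per read at CL.ONE, and warms only that replica's cache. It reproduces the 0% then roughly 30% pattern, but the third run only reaches about 56%, so the observed 99% would indeed need something extra, such as digest reads (when not dropped) warming the remaining replicas.

```python
import random

RF = 3            # replication factor
ROWS = 10_000     # rows queried per run
random.seed(42)

# warm[r] = set of replicas that hold row r in their row cache
warm = [set() for _ in range(ROWS)]

def run():
    hits = 0
    for r in range(ROWS):
        replica = random.randrange(RF)   # CL.ONE: data read goes to one replica
        if replica in warm[r]:
            hits += 1
        warm[r].add(replica)             # only that replica's cache is warmed
    return hits / ROWS

rates = [run() for _ in range(3)]
print(rates)   # roughly [0.0, 0.33, 0.56]
```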
Hi,
I configured my server with row_cache_size_in_mb: 1920.
When I started the server and checked JMX, it showed the capacity was
set to 1024MB.
I investigated further and found that the version of
concurrentlinkedhashmap used is 1.2, which caps the capacity at 1GB.
So, in cassandra 1.1 th
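The arithmetic behind the clamp, as I reconstruct it from this message (the `1 << 30` limit is assumed to be concurrentlinkedhashmap 1.2's maximum weighted capacity):

```python
# row_cache_size_in_mb: 1920 was requested, but concurrentlinkedhashmap 1.2
# caps its weighted capacity at 1 << 30 bytes (1 GiB), so the effective
# capacity reported over JMX comes out as 1024 MB.
CLHM_MAX_CAPACITY = 1 << 30                 # 1 GiB
requested = 1920 * 1024 * 1024              # 2,013,265,920 bytes
effective = min(requested, CLHM_MAX_CAPACITY)
print(effective // (1024 * 1024))           # 1024
```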
I'm running one Cassandra node (version 1.2.6) and I enabled the row
cache with 1GB.
But looking at the Cassandra metrics on JConsole, Row Cache Requests are
very low after a high number of queries (about 12 requests).
RowCache metrics:
Capacity: 1GB
Entries: 3
HitRate:
28
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
Have set up the C* nodes with
row_cache_size_in_mb: 1024
row_cache_save_period: 14400
and I am making this query
select svc_pt_id, meas_type_id, read_time, value FROM
cts_svc_pt_latest_int_read where svc_pt_id
Hi,
In 99% of use cases, Cassandra's row cache is not something you should look
into. Leveraging the page cache yields good results and, if accounted for, can
provide you with a performance increase on the read side.
I'm not a fan of the default row cache implementation and its invalidation
mechanism
Thanks, Matija! That was insightful.
I don't really have a use case in particular, however, what I'm trying to
do is to figure out how the Cassandra performance can be leveraged by using
different caching mechanisms, such as row cache, key cache, partition
summary etc. Of course, it
I don't really have a use case in particular, however, what I'm trying to
> do is to figure out how the Cassandra performance can be leveraged by using
> different caching mechanisms, such as row cache, key cache, partition
> summary etc. Of course, it will also heavily depend
I see. Thanks, Arvydas!
In terms of the eviction policy in the row cache, does a write operation
invalidate only the row(s) which are going to be modified, or the whole
partition? In older versions of Cassandra, I believe the whole partition
gets invalidated even if only one row is modified. Is that
:32 +0530 preetika tyagi
<preetikaty...@gmail.com> wrote
I see. Thanks, Arvydas!
In terms of the eviction policy in the row cache, does a write operation
invalidate only the row(s) which are going to be modified, or the whole partition?
In older versions of Cassandra, I believe the
Dear All,
The default is row_cache_save_period=0; does that mean the Row Cache does not
work in this situation?
But we can still see row cache hits.
Row Cache : entries 202787, size 100 MB, capacity 100 MB, 3095293
hits, 6796801 requests, 0.455 recent hit rate, 0 save period in seconds
I found a lot of documentation about the read path for key and row caches, but
I haven't found anything in regard to the write path. My app has the need to
record a large quantity of very short lived temporal data that will expire
within seconds and only have a small percentage of the rows acce
Heya!
I’ve been observing some strange and worrying behaviour all this week with row
cache hits taking hundreds of milliseconds.
Cassandra 1.2.15, Datastax CQL driver 1.0.4.
EC2 m1.xlarge instances
RF=3, N=4
vnodes in use
key cache: 200M
row cache: 200M
row_cache_provider
Hello All,
Wondering if anyone has tried to modify the row-cache API to use both the
partition key and the clustering keys to convert the row-cache, which is
really a partition cache today, into a true row-cache? This might help with
broader adoption of row-cache for use-cases with large
On Fri, Nov 5, 2010 at 1:41 PM, Jeremy Davis
wrote:
> I saw in the Riptano "Tuning Cassandra" slide deck that the row cache can
> be detrimental if there are a lot of updates to the cached row. Is this
> because the cache is not write through, and every update necessitates
>
ion.
OTOH, you may be talking about continuously evicting rows from the cache
(because the cache is too small )... Assuming that is not the case, should I
turn on Row Cache?
In short, it seems like the general advice is unless you have a set of
nearly static rows, AND they all fit in the cache, then r
e the optimal situation.
>
> OTOH, you may be talking about continuously evicting rows from the cache
> (because the cache is too small )... Assuming that is not the case, should I
> turn on Row Cache?
> In short, it seems like the general advice is unless you have a set of near
l )... Assuming that is not the case, should I
> turn on Row Cache?
>
This is a problem too. You can't make the cache huge because of GC
pressure, and if your read pattern is largely random then the eviction will
cause GC pressure.
> In short, it seems like the general advice is
Does anyone have any comments/suggestions for me regarding this? Thanks
I am trying to understand some strange behavior of cassandra row cache. We
> have a 6-node Cassandra cluster in a single data center on 2 racks, and the
> neighboring nodes on the ring are from alternative racks.
> Row Cache: size 1072651974 (bytes), capacity 1073741824 (bytes), 0
> hits, 2576 requests, NaN recent hit rate, 0 save period in seconds
So the cache is pretty much full; there is only 1 MB free.
There were 2,576 read requests that tried to get a row from the cache. Zero of
tho
Hi Aaron,
Thank you, and your explanation makes sense. At the time, I thought having
1GB of row cache on each node was plenty, because there was an
aggregate 6GB of cache, but you are right: with each row in the 10s of MBs, some
of the nodes can go into a constant load-and-evict cycle and
> Does this mean we should not enable row caches until we are absolutely sure
> about what's hot (I think there is a reason why row caches are disabled by
> default) ?
Yes and Yes.
Row cache takes memory and CPU; unless you know you are getting a benefit from
it, leave it off.
Got it. Thanks again, Aaron.
-- Y.
On Tue, Dec 4, 2012 at 3:07 PM, aaron morton wrote:
> Does this mean we should not enable row caches until we are absolutely
> sure about what's hot (I think there is a reason why row caches are
> disabled by default) ?
>
> Yes and Ye
issues and we could trace the timeouts to parnew gc collections which
were quite frequent. You might just want to take a look there too.
On Sat, Dec 29, 2012 at 4:44 PM, André Cruz wrote:
> Hello.
>
> I recently was having some timeout issues while updating counters and
> turned
On 29/12/2012, at 16:59, rohit bhatia wrote:
> Reads during a write still occur during a counter increment with CL ONE, but
> that latency is not counted in the request latency for the write. Your local
> node write latency of 45 microseconds is pretty quick. what is your timeout
> and the wri
I assume you mean 8 seconds and not 8ms.
That's pretty huge to be caused by GC. Is there a lot of load on your servers?
You might also need to check for memory contention.
Regarding GC, since it's ParNew, all you can really do is increase the heap and
young gen size, or modify the tenuring rate. But that can't be
Can you post gc settings? Also check logs and see what it says
Also post how many writes and reads along with avg row size
Sent from my iPhone
On Dec 29, 2012, at 12:28 PM, rohit bhatia wrote:
> I assume you mean 8 seconds and not 8ms.
> That's pretty huge to be caused by GC. Is there a lot of lo
On Dec 29, 2012, at 8:53 PM, Mohit Anchlia wrote:
> Can you post gc settings? Also check logs and see what it says
These are the relevant JVM settings:
-home /usr/lib/jvm/j2re1.6-oracle/bin/../
-ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar
-XX:+UseThreadPriorities
-XX:ThreadPriorityP
Does anyone see anything wrong in these settings? Anything to account for a 8s
timeout during a counter increment?
Thanks,
André
On 31/12/2012, at 14:35, André Cruz wrote:
> On Dec 29, 2012, at 8:53 PM, Mohit Anchlia wrote:
>
>> Can you post gc settings? Also check logs and see what it says
The first thing I look for with timeouts like that is a flush storm causing
blocking in the write path (due to the internal "switch lock").
Take a look in the logs, for a number of messages such as "enqueuing CF…" and
"writing cf..". Look for a pattern of enqueuing cf messages that occur
immed
Any clue on this ?
A well-configured row cache could save us a lot of disk reads, and IO
is definitely our bottleneck... If someone could explain why the row cache
has so much impact on my JVM and how to avoid it, it would be appreciated
:).
2013/3/8 Alain RODRIGUEZ
> Hi,
>
> We have s
I can add that I have JNA correctly loaded, from the logs: "JNA mlockall
successful"
2013/3/11 Alain RODRIGUEZ
> Any clue on this ?
>
> Row cache well configured could avoid us a lot of disk read, and IO
> is definitely our bottleneck... If someone could explain why the r
I have the same problem!
2013/3/11 Alain RODRIGUEZ
> I can add that I have JNA correctly loaded, from the logs: "JNA mlockall
> successful"
>
>
> 2013/3/11 Alain RODRIGUEZ
>
>> Any clue on this ?
>>
>> Row cache well configured could avoid us a
What version are you using?
Sounds like you have configured it correctly. Did you restart the node after
changing row_cache_size_in_mb?
The changes in GC activity are not huge and may not be due to cache activity.
Have they continued after you enabled the row cache?
What is the output
udge, and just happened on the node in which I had enabled row
cache. I just enabled it on .164 node from 10:45 to 10:48 and the heap size
doubled from 3.5GB to 7GB (out of 8, which induced memory pressure). About
GC, all the collections increased a lot compare to the other nodes with row
caching disab
> No, I didn't. I used the nodetool setcachecapacity and didn't restart the
> node.
ok.
> I find them huge, and they just happened on the node on which I had enabled row
> cache. I just enabled it on .164 node from 10:45 to 10:48 and the heap size
> doubled from 3.5G
. I used the nodetool setcachecapacity and didn't restart
> the node.
> ok.
>
> > I find them huge, and they just happened on the node on which I had enabled
> row cache. I just enabled it on .164 node from 10:45 to 10:48 and the heap
> size doubled from 3.5GB to 7GB (out of 8,
I saw in the mailing list
> ?
>
>
> 2013/3/14 aaron morton
> > No, I didn't. I used the nodetool setcachecapacity and didn't restart the
> > node.
> ok.
>
> > I find them huge, and they just happened on the node on which I had enabled row
> >
On Thu, Jun 30, 2011 at 12:44 PM, Daniel Doubleday wrote:
> Hi all - or rather devs
>
> we have been working on an alternative implementation to the existing row
> cache(s)
>
> We have 2 main goals:
>
> - Decrease memory -> get more rows in the cache without suf
Fri, Jul 1, 2011 at 2:25 AM, Edward Capriolo wrote:
>
>
> On Thu, Jun 30, 2011 at 12:44 PM, Daniel Doubleday <
> daniel.double...@gmx.net> wrote:
>
>> Hi all - or rather devs
>>
>> we have been working on an alternative implementation to the existing row
>
I'm interested. :)
On Thu, Jun 30, 2011 at 11:44 AM, Daniel Doubleday
wrote:
> Hi all - or rather devs
>
> we have been working on an alternative implementation to the existing row
> cache(s)
>
> We have 2 main goals:
>
> - Decrease memory -> get more rows in th
Not sure how feasible it is or if it's planned. But it would probably require
that the nodes are able to share the state of their row cache so as to know
which parts to warm. Otherwise it sounds like you're assuming the node can hold
the entire data set in memory.
If you kn
On Sun, Aug 8, 2010 at 5:24 AM, aaron morton wrote:
> Not sure how feasible it is or if it's planned. But it would probably
> require that the nodes are able to share the state of their row cache so as
> to know which parts to warm. Otherwise it sounds like you're assuming the
ve a couple of use cases with wide rows with a small portion of hot data
> in them.
>
> Example:
>
> Chatlog:
>
> {
> $userid1-$userid2: [{timestamp: message}, {timestamp: message} ...]
> }
>
> People tend to check only the most recent pages. So while the c
cassandra 1.1.1 ships with concurrentlinkedhashmap-lru-1.3.jar
row_cache_size_in_mb starts life as an int but the byte size is stored as a
long
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/CacheService.java#L143
Cheers
-
Aaron Morton
Fre
I was using the datastax build. Do they also have a 1.1 build?
On Mon, Jun 18, 2012 at 9:05 AM, aaron morton wrote:
> cassandra 1.1.1 ships with concurrentlinkedhashmap-lru-1.3.jar
>
> row_cache_size_in_mb starts life as an int but the byte size is stored as a
> long
> https://github.com/apache/c
sorry I meant 1.1.1 build
On Mon, Jun 25, 2012 at 10:40 AM, Noble Paul നോബിള് नोब्ळ्
wrote:
> I was using the datastax build. Do they also have a 1.1 build?
>
> On Mon, Jun 18, 2012 at 9:05 AM, aaron morton wrote:
>> cassandra 1.1.1 ships with concurrentlinkedhashmap-lru-1.3.jar
>>
>> row_cach
9/12 = .75
It's a rate, not a percentage.
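Spelled out, using the numbers from this exchange (9 hits out of 12 requests):

```python
hits, requests = 9, 12
hit_rate = hits / requests   # the JMX HitRate is a fraction in [0, 1], not a percentage
print(hit_rate)              # 0.75
```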
On Sat, Aug 31, 2013 at 2:21 PM, Sávio Teles
wrote:
> I'm running one Cassandra node (version 1.2.6) and I enabled the row
> cache with 1GB.
> But looking at the Cassandra metrics on JConsole, Row Cache Reques
>> I'm running one Cassandra node (version 1.2.6) and I enabled the row
>> cache with 1GB.
>>
>> But looking at the Cassandra metrics on JConsole, Row Cache Requests are
>> very low after a high number of queries (about 12 requests).
>>
>> RowC
y = '99PERCENTILE';
>
> Have set up the C* nodes with
> row_cache_size_in_mb: 1024
> row_cache_save_period: 14400
>
> and I am making this query
> select svc_pt_id, meas_type_id, read_time, value FROM
> cts
assertTrace("Preparing statement").then("Row cache
hit").then("Request complete");
This would be a pretty awesome way to verify things without mock/mockito.
On Mon, Oct 3, 2016 at 2:35 PM, Abhinav Solan
wrote:
> Hi, can anyone please help me with this
>
> Than
Which version of Cassandra are you running (I can tell it’s newer than 2.1, but
exact version would be useful)?
From: Abhinav Solan
Reply-To: "user@cassandra.apache.org"
Date: Monday, October 3, 2016 at 11:35 AM
To: "user@cassandra.apache.org"
Subject: Re: Row cach
version would be useful)?
>
>
>
> *From: *Abhinav Solan
> *Reply-To: *"user@cassandra.apache.org"
> *Date: *Monday, October 3, 2016 at 11:35 AM
> *To: *"user@cassandra.apache.org"
> *Subject: *Re: Row cache not working
>
>
>
> Hi, can anyone
Seems like it’s probably worth opening a jira issue to track it (either to
confirm it’s a bug, or to be able to better explain if/that it’s working as
intended – the row cache is probably missing because trace indicates the read
isn’t cacheable, but I suspect it should be cacheable
xplain if/that it’s working as
> intended – the row cache is probably missing because trace indicates the
> read isn’t cacheable, but I suspect it should be cacheable).
>
>
>
>
>
>
> Do note, though, that setting rows_per_partition to ALL can be very very
> very dangero
If I remember correctly, the row cache caches only N rows from the beginning of the
partition, N being some configurable number.
See this link which is suggesting that:
http://www.datastax.com/dev/blog/row-caching-in-cassandra-2-1
Br,
Hannu
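The per-partition cap Hannu describes is set per table from Cassandra 2.1 on. A sketch with hypothetical keyspace/table names; note that "the beginning of the partition" follows the clustering order, so a DESC order keeps the newest rows cached, which fits the hot-recent-data pattern discussed earlier in the thread:

```sql
-- Hypothetical schema: cache only the first 100 CQL rows of each partition.
ALTER TABLE chat.messages
  WITH caching = {'keys': 'ALL', 'rows_per_partition': '100'};
```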
> On 4 Oct 2016, at 1.32, Edward Capriolo wr
t;
Subject: Re: Row cache not working
If I remember correctly, the row cache caches only N rows from the beginning of the
partition, N being some configurable number.
See this link which is suggesting that:
http://www.datastax.com/dev/blog/row-caching-in-cassandra-2-1
Br,
Hannu
On
And we are using C* 2.1.18.
-- Original --
From: "";<2535...@qq.com>;
Date: Wed, Sep 20, 2017 11:27 AM
To: "user";
Subject: Row Cache hit issue
Dear All,
The default row_cache_save_period=0,looks Row Cache does n
27 PM, Peng Xiao <2535...@qq.com> wrote:
> And we are using C* 2.1.18.
>
>
> -- Original --
> *From: * "我自己的邮箱";<2535...@qq.com>;
> *Date: * Wed, Sep 20, 2017 11:27 AM
> *To: * "user";
> *Subject: * Row Cache hi
:06
To: cassandra
Subject: Re: Row Cache hit issue
Hi Peng,
C* periodically saves the cache to disk, to solve the cold-start problem. If
row_cache_save_period=0, it means C* does not save the cache to disk. But the cache
is still working if it's enabled in the table schema; the cache will just be empty
after a restart.
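Stated as config (key names as in cassandra.yaml; the size value is illustrative):

```yaml
# The cache itself is enabled by a non-zero size (plus table-level caching):
row_cache_size_in_mb: 100
# 0 disables only the periodic save of cache keys to disk; the in-memory
# cache still serves hits, it just starts cold after a restart.
row_cache_save_period: 0
```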
Thanks All.
-- Original --
From: "Steinmaurer, Thomas"
Date: Wed, Sep 20, 2017 1:38 PM
To: "user@cassandra.apache.org"
Subject: RE: Row Cache hit issue
Hi,
additionally, with saved (key) caches, we had some sort of
Hello,
I am trying to verify and understand fully the functionality of row cache in
Cassandra.
I have been using mainly two different sources for information:
https://github.com/apache/cassandra/blob/0db88242c66d3a7193a9ad836f9a515b3ac7f9fa/src/java/org/apache/cassandra/db
Is there a JMX property somewhere that I could monitor to see how old the
oldest row cache item is?
I want to see how much churn there is.
Thanks in advance,
John...
On Mon, Mar 31, 2014 at 9:37 AM, Wayne Schroeder <
wschroe...@pinsightmedia.com> wrote:
> I found a lot of documentation about the read path for key and row caches,
> but I haven't found anything in regard to the write path. My app has the
> need to record a large quantity of very short lived tem
Perhaps I should clarify my question. Is this possible, and how might I
accomplish it with Cassandra?
Wayne
On Mar 31, 2014, at 12:58 PM, Robert Coli
mailto:rc...@eventbrite.com>>
wrote:
On Mon, Mar 31, 2014 at 9:37 AM, Wayne Schroeder
mailto:wschroe...@pinsightmedia.com>> wrote:
I found a
On Mar 31, 2014 12:38 PM, "Wayne Schroeder"
wrote:
> I found a lot of documentation about the read path for key and row caches,
> but I haven't found anything in regard to the write path. My app has the
> need to record a large quantity of very short lived temporal data that will
> expire within
on't see any evidence that writes end up in the
> cache--that it takes at least one read to get it into the cache. I also
> realize that, assuming I don't cause SSTable writes due to sheer quantity,
> that the data would be in memory anyway.
>
> Has anyone done anything simi
caching and consistency are "hard" - a clustering / row may not
change, but it may be deleted by a range delete that deletes it and many
other clusterings / rows, which makes maintaining correctness of an
individual row cache not that different from maintenance of the data around
it, which