’ll compact
> away most of the other data in those old sstables (but not the partition
> that’s been manually updated)
>
> Also table level TTLs help catch this type of manual manipulation -
> consider adding it if appropriate.
>
> --
> Jeff Jirsa
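Jeff's table-level TTL suggestion above is a one-line schema change; a hedged sketch, with hypothetical keyspace/table names and a 90-day TTL chosen purely for illustration:

```
-- Hypothetical names; 90 days is an arbitrary example value.
-- Writes without an explicit TTL inherit this default, so manually
-- manipulated partitions eventually expire instead of lingering.
ALTER TABLE my_ks.my_table WITH default_time_to_live = 7776000;
```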
>
>
> On May 6, 2
>
> On May 3, 2019, at 7:57 PM, Nick Hatfield
> wrote:
>
> Hi Mike,
>
>
>
> If you will, share your compaction settings. More than likely, your issue
> is from 1 of 2 reasons:
> 1. You have read repair chance set to anything other than 0
>
> 2. You’re running r
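Point 1 above (a nonzero read repair chance) can be checked in the table schema and zeroed out; a hedged sketch with hypothetical names:

```
-- Hypothetical names; both read-repair probabilities set to 0 so
-- background read repair stops rewriting old data into new sstables.
ALTER TABLE my_ks.my_table
  WITH read_repair_chance = 0.0
  AND dclocal_read_repair_chance = 0.0;
```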
Thx for the help Paul - there are definitely some details here I still
don't fully understand, but this helped me resolve the problem and know
what to look for in the future :)
On Fri, May 3, 2019 at 12:44 PM Paul Chandler wrote:
> Hi Mike,
>
> For TWCS the sstable can only be deleted
it is (i.e. only one CF even
though I have a few others that share a very similar schema, and only some
nodes) seems like it will help me prevent it.
On Thu, May 2, 2019 at 1:00 PM Paul Chandler wrote:
> Hi Mike,
>
> It sounds like that record may have been deleted; if that is the case then
_info" : {
"local_delete_time" : "2019-01-22T17:59:35Z" }
}
]
}
]
}
```
As expected, almost all of the data except this one suspicious partition
has a ttl and is already expired. But if a partition isn't expired and I
see it in the sst
ther ideas? Why does the row show in `sstabledump`
but not when I query for it?
I appreciate any help or suggestions!
- Mike
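A row that appears in sstabledump but not in query results is usually shadowed by a newer tombstone. This toy model (not Cassandra source; all names are invented) sketches the last-write-wins rule that produces that behavior:

```javascript
// Toy model of tombstone shadowing: a cell stays in its sstable on disk
// (so sstabledump still shows it) while a newer deletion timestamp hides
// it from reads until compaction finally purges both.
function readRow(cells, tombstones) {
  const live = {};
  for (const [key, cell] of Object.entries(cells)) {
    const delTs = tombstones[key];
    // A cell is live only if no deletion exists or the write is newer.
    if (delTs === undefined || cell.ts > delTs) {
      live[key] = cell.value;
    }
  }
  return live;
}

const cells = { pk1: { value: 'value-a', ts: 100 } }; // written at t=100, still on disk
const tombstones = { pk1: 150 };                      // deleted at t=150
console.log(readRow(cells, tombstones));              // {} -> on disk, invisible to queries
```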
: {
coreConnectionsPerHost: {
[distance.local]: 2,
[distance.remote]: 0
}
}
```
Any suggestions?
- Mike
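For context, the snippet above is the pooling section of the Node.js driver's client options. A minimal sketch of the full options object; the `distance` values below are a local stand-in (the real enum comes from the driver's `types.distance`), and the contact point is an assumption:

```javascript
// Stand-in for require('cassandra-driver').types.distance; the numeric
// values are illustrative, not a claim about the driver's internals.
const distance = { local: 0, remote: 1, ignored: 2 };

const clientOptions = {
  contactPoints: ['127.0.0.1'], // assumption: a local test node
  pooling: {
    coreConnectionsPerHost: {
      [distance.local]: 2,  // two core connections per local-DC host
      [distance.remote]: 0  // no connections to remote-DC hosts
    }
  }
};

console.log(clientOptions.pooling.coreConnectionsPerHost[distance.local]); // 2
```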
?
>
> On Tue, Mar 27, 2018 at 2:24 PM, Mike Torra <mto...@salesforce.com> wrote:
>
>> Hi There -
>>
>> I have noticed an issue where I consistently see high p999 read latency
>> on a node for a few hours after replacing the node. Before replacing the
consistent to be actually caused by
that. The problem is consistent across multiple replacements, and multiple
EC2 regions.
I appreciate any suggestions!
- Mike
Then could it be that calling `nodetool drain` after calling `nodetool
disablegossip` is what causes the problem?
On Mon, Feb 12, 2018 at 6:12 PM, kurt greaves wrote:
>
> Actually, it's not really clear to me why disablebinary and thrift are
> necessary prior to drain,
s that I moved
`nodetool disablegossip` to after `nodetool drain`. This is pretty
anecdotal, but is there any explanation for why this might happen? I'll be
monitoring my cluster closely to see if this change does indeed fix the
problem.
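The ordering being described above, with gossip disabled only after the drain, can be sketched as a shutdown sequence (hedged; exact steps vary by version, and drain itself already quiesces the node):

```
# Stop client traffic first, then drain while gossip is still up so
# peers observe a clean shutdown; disable gossip last, if at all.
nodetool disablebinary
nodetool disablethrift
nodetool drain
nodetool disablegossip   # optional; drain has already stopped writes
sudo service cassandra stop
```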
On Mon, Feb 12, 2018 at 9:33 AM, Mike Torra <mto...@s
Any other ideas? If I simply stop the node, there is no latency problem,
but once I start the node the problem appears. This happens consistently
for all nodes in the cluster
On Wed, Feb 7, 2018 at 11:36 AM, Mike Torra <mto...@salesforce.com> wrote:
> No, I am not
>
> On Wed, Fe
No, I am not
On Wed, Feb 7, 2018 at 11:35 AM, Jeff Jirsa <jji...@gmail.com> wrote:
> Are you using internode ssl?
>
>
> --
> Jeff Jirsa
>
>
> On Feb 7, 2018, at 8:24 AM, Mike Torra <mto...@salesforce.com> wrote:
>
> Thanks for the feedback guys. That e
drain do
> the right thing), but in this case, your data model looks like the biggest
> culprit (unless it's an incomplete recreation).
>
> - Jeff
>
>
> On Tue, Feb 6, 2018 at 10:58 AM, Mike Torra <mto...@salesforce.com> wrote:
>
>> Hi -
>>
>> I
racefully restart the cluster? It could be something to do with the nodejs
driver, but I can't find anything there to try.
I appreciate any suggestions or advice.
- Mike
the child table. But I
believe I'll get the same problem because Cassandra simply doesn't sort the way
an RDBMS does. So there must be an idea behind the philosophy of Cassandra.
Can anyone help me out?
Best regards
Mike Wenzel
(1) https://www.datastax.com/dev/blog/we-shall-have-order
I'm trying to use sstableloader to bulk load some data to my 4 DC cluster,
and I can't quite get it to work. Here is how I'm trying to run it:
sstableloader -d 127.0.0.1 -i {csv list of private ips of nodes in cluster}
myks/mttest
At first this seems to work, with a steady stream of logging
to tell when/if the local node has successfully updated the
compaction strategy? Looking at the sstable files, it seems like they are
still based on STCS but I don't know how to be sure.
Appreciate any tips or suggestions!
On Mon, Mar 13, 2017 at 5:30 PM, Mike Torra <mto...@salesforce.com>
I'm trying to change compaction strategy one node at a time. I'm using
jmxterm like this:
`echo 'set -b
org.apache.cassandra.db:type=ColumnFamilies,keyspace=my_ks,columnfamily=my_cf
CompactionParametersJson
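For reference, the truncated jmxterm invocation above would look something like the following in full; the strategy class and the jar file name are assumptions for illustration, not a claim about the original command:

```
# Hedged sketch: set one node's compaction parameters over JMX.
# The attribute takes a JSON string naming the strategy class.
echo 'set -b org.apache.cassandra.db:type=ColumnFamilies,keyspace=my_ks,columnfamily=my_cf CompactionParametersJson {"class":"LeveledCompactionStrategy"}' \
  | java -jar jmxterm-1.0-alpha-4-uber.jar -l localhost:7199
```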
I can't say that I have tried that while the issue is going on, but I have
done such rolling restarts for sure, and the timeouts still occur every
day. What would a rolling restart do to fix the issue?
In fact, as I write this, I am restarting each node one by one in the
eu-west-1 datacenter, and
here:
https://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archDataDistributeFailDetect.html.
This does not seem to have changed anything on the nodes that I've changed
it on.
I appreciate any suggestions on what else to try in order to track down
these timeouts.
- Mike
.apache.org>"
<user@cassandra.apache.org>
Date: Saturday, January 14, 2017 at 1:25 PM
To: "user@cassandra.apache.org"
<user@cassandra.apache.org>
Su
We currently use redis to store sorted sets that we increment many, many times
more than we read. For example, only about 5% of these sets are ever read. We
are getting to the point where redis is becoming difficult to scale (currently
at >20 nodes).
We've started using cassandra for other
Just bumping - has anyone seen this before?
http://stackoverflow.com/questions/41446352/cassandra-3-9-jvm-metrics-have-bad-name
From: Mike Torra <mto...@demandware.com>
Reply-To: "user@cassandra.apache.org
extra '.'
character that causes them to not show up in graphite.
Am I missing something silly here? Appreciate any help or suggestions.
- Mike
.apache.org>>
Subject: Re: failing bootstraps with OOM
On Wed, Nov 2, 2016 at 3:35 PM, Mike Torra
<mto...@demandware.com> wrote:
>
> Hi All -
>
> I am trying to bootstrap a replacement node in a cluster, but it consistently
> fails
appreciate any suggestions on what else I can try to track down the cause
of these OOM exceptions.
- Mike
Garo,
No, we didn't notice any change in system load, just the expected spike in
packet counts.
Mike
On Wed, Jul 20, 2016 at 3:49 PM, Juho Mäkinen <juho.maki...@gmail.com>
wrote:
> Just to pick this up: Did you see any system load spikes? I'm tracing a
> problem on 2.2.7 where my
by the initial timeout
spike which leads to dropping all / high-percentage of all subsequent
traffic.
We are planning to continue production use with message coalescing disabled for
now and may run tests in our staging environments to identify where the
coalescing is breaking this.
Mike
On Tue, Jul 5
One thing to add, if we do a rolling restart of the ring the timeouts
disappear entirely for several hours and performance returns to normal.
It's as if something is leaking over time, but we haven't seen any
noticeable change in heap.
On Thu, Jun 23, 2016 at 10:38 AM, Mike Heffner
n what to look for? Can we increase thread count/pool sizes
for the messaging service?
Thanks,
Mike
--
Mike Heffner <m...@librato.com>
Librato, Inc.
a look at the records in system.compaction:
select * from system.compaction_history;
Regards,
Mike Yeap
On Tue, May 31, 2016 at 5:21 PM, Paul Dunkler <p...@uplex.de> wrote:
> And - as an addition:
>
> Shouldn't it be documented that even snapshot files can change?
>
> I
memtable_offheap_space_in_mb
Regards,
Mike Yeap
On Sun, May 29, 2016 at 6:18 PM, Bhuvan Rawal <bhu1ra...@gmail.com> wrote:
> Hi,
>
> We are running a 6 Node cluster in 2 DC on DSC 3.0.3, with 3 Node each.
> One of the node was showing UNREACHABLE on other nodes in nodetool
Hi Paolo,
a) was there any large insertion done?
b) are the a lot of files in the saved_caches directory?
c) would you consider to increase the HEAP_NEWSIZE to, say, 1200M?
Regards,
Mike Yeap
On Fri, May 27, 2016 at 12:39 AM, Paolo Crosato <
paolo.cros...@targaubiest.com> wrote:
> H
Hi George, are you using NetworkTopologyStrategy as the replication
strategy for your keyspace? If yes, can you check the
cassandra-rackdc.properties of this new node?
https://issues.apache.org/jira/browse/CASSANDRA-8279
Regards,
Mike Yeap
On Wed, May 25, 2016 at 2:31 PM, George Sigletos
the default
for Cassandra 2.2 and later.
Regards,
Mike Yeap
On Wed, May 25, 2016 at 8:01 AM, Bryan Cheng <br...@blockcypher.com> wrote:
> Hi Luke,
>
> I've never found nodetool status' load to be useful beyond a general
> indicator.
>
> You should expect some sma
where you need to manage that
all yourself (painfully)
--
--mike
I didn't use the -full option of the "nodetool rebuild".
Thanks!
Regards,
Mike Yeap
On Thu, May 19, 2016 at 4:03 PM, Ben Slater <ben.sla...@instaclustr.com>
wrote:
> Use nodetool listsnapshots to check if you have a snapshot - in default
> configuration, Cassandra takes sn
Hi all, I would like to know, is there any way to rebuild a particular
column family when all the SSTables files for this column family are
missing?? Say we do not have any backup of it.
Thank you.
Regards,
Mike Yeap
Emils,
We believe we've tracked it down to the following issue:
https://issues.apache.org/jira/browse/CASSANDRA-11302, introduced in 2.1.5.
We are running a build of 2.2.5 with that patch and so far have not seen
any more timeouts.
Mike
On Fri, Mar 4, 2016 at 3:14 AM, Emīls Šolmanis
Emils,
I realize this may be a big downgrade, but are your timeouts reproducible
under Cassandra 2.1.4?
Mike
On Thu, Feb 25, 2016 at 10:34 AM, Emīls Šolmanis <emils.solma...@gmail.com>
wrote:
> Having had a read through the archives, I missed this at first, but this
> seems to be *e
:
memtable_allocation_type: offheap_objects
memtable_flush_writers: 8
Cheers,
Mike
On Fri, Feb 19, 2016 at 1:46 PM, Nate McCall <n...@thelastpickle.com> wrote:
> The biggest change which *might* explain your behavior has to do with the
> changes in memtable flushing between 2.0 and
, batching (via Thrift
mostly) to 5 tables, between 6-1500 rows per batch.
Mike
On Thu, Feb 18, 2016 at 12:22 PM, Anuj Wadehra <anujw_2...@yahoo.co.in>
wrote:
> What's the GC overhead? Can you share your GC collector and settings?
>
>
> What's your query pattern? Do you use
that we've tracked it to something between
2.0.x and 2.1.x, so we are focusing on narrowing which point release it was
introduced in.
Cheers,
Mike
On Thu, Feb 18, 2016 at 3:33 AM, Alain RODRIGUEZ <arodr...@gmail.com> wrote:
> Hi Mike,
>
> What about the output of tpstats ? I i
on that earlier.
Thanks,
Mike
On Wed, Feb 10, 2016 at 2:51 PM, Mike Heffner <m...@librato.com> wrote:
> Hi all,
>
> We've recently embarked on a project to update our Cassandra
> infrastructure running on EC2. We are long time users of 2.0.x and are
> testing out a move to version
Jaydeep,
No, we don't use any light weight transactions.
Mike
On Wed, Feb 17, 2016 at 6:44 PM, Jaydeep Chovatia <
chovatia.jayd...@gmail.com> wrote:
> Are you guys using light weight transactions in your write path?
>
> On Thu, Feb 11, 2016 at 12:36 AM, Fabrice Facorat
g obvious. Happy to provide any
more information that may help.
We are pretty much at the point of sprinkling debug around the code to
track down what could be blocking.
Thanks,
Mike
--
Mike Heffner <m...@librato.com>
Librato, Inc.
Paulo,
Thanks for the suggestion, we ran some tests against CMS and saw the same
timeouts. On that note though, we are going to try doubling the instance
sizes and testing with double the heap (even though current usage is low).
Mike
On Wed, Feb 10, 2016 at 3:40 PM, Paulo Motta <pauloric
Jeff,
We have both commitlog and data on a 4TB EBS with 10k IOPS.
Mike
On Wed, Feb 10, 2016 at 5:28 PM, Jeff Jirsa <jeff.ji...@crowdstrike.com>
wrote:
> What disk size are you using?
>
>
>
> From: Mike Heffner
> Reply-To: "user@cassandra.apache.org"
> Date:
we are not using DTCS, but it
matches since the upgrade appeared to only drop fully expired sstables.
Mike
On Sat, Jul 18, 2015 at 3:40 PM, Nate McCall n...@thelastpickle.com wrote:
Perhaps https://issues.apache.org/jira/browse/CASSANDRA-9592 got
compactions moving forward for you? This would
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
Thanks,
Mike
--
Mike Heffner m...@librato.com
Librato, Inc.
that means in practice. Will the counters be
99.99% accurate? How often will they be over or under counted?
Thanks, Mike.
are idling), so I assume I'm CPU bound on the node side. But why? What is the
node doing? Why does it take so long?
--
Mike Neir
Liquid Web, Inc.
Infrastructure Administrator
=Cache,scope=KeyCache,name=Capacity
The Type of Attribute (Value) is java.lang.Object
is it possible to expose the gauge datatype as numeric types instead of
Object, or to work around it, for example by using a metrics reporter, etc.?
Thanks a lot for any suggestions!
Best Regards!
Mike
is to stick with
Thrift?
Mike
On Thu, Jul 17, 2014 at 8:27 PM, Tyler Hobbs ty...@datastax.com wrote:
For this type of query, you really want the tuple notation introduced in
2.0.6 (https://issues.apache.org/jira/browse/CASSANDRA-4851):
SELECT * FROM CF WHERE key='X' AND (column1, column2
AND column34
AND column1=2;
but that is rejected with:
Bad Request: PRIMARY KEY part column2 cannot be restricted (preceding part
column1 is either not restricted or by a non-EQ relation)
Mike
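Written out, the tuple notation Tyler points to would look roughly like this, assuming a schema with PRIMARY KEY (key, column1, column2):

```
-- Restricting both clustering columns at once with a tuple comparison
-- avoids the "preceding part ... non-EQ relation" error quoted above.
SELECT * FROM cf WHERE key = 'X' AND (column1, column2) > (1, 5);
```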
On Thu, Jul 17, 2014 at 6:37 PM, Michael Dykman mdyk...@gmail.com wrote:
The last term in this query
| 40 | 52 | 91 | 45
Mike
that simply
restarting will inevitably hit this problem again.
Cheers,
Mike
--
Mike Heffner m...@librato.com
Librato, Inc.
I am investigating Java Out of memory heap errors. So I created an .hprof
file and loaded it into Eclipse Memory Analyzer Tool which gave some
Problem Suspects.
First one looks like:
One instance of org.apache.cassandra.db.ColumnFamilyStore loaded by
sun.misc.Launcher$AppClassLoader @
Thanks for the response Rob,
And yes, the relevel helped the bloom filter issue quite a bit, although it
took a couple of days for the relevel to complete on a single node (so if
anyone tried this, be prepared)
-Mike
Sent from my iPhone
On Sep 23, 2013, at 6:34 PM, Robert Coli rc
:
* fix 1.0.x node join to mixed version cluster, other nodes = 1.1
(CASSANDRA-4195)
-Jeremiah
--
Mike Neir
Liquid Web, Inc.
Infrastructure Administrator
to just hop back into
the cluster without error and without transitioning through the Joining state.
--
Mike Neir
Liquid Web, Inc.
Infrastructure Administrator
/2013 12:15 PM, Robert Coli wrote:
On Fri, Aug 30, 2013 at 8:57 AM, Mike Neir <m...@liquidweb.com> wrote:
I'm faced with the need to update a 36 node cluster with roughly 25T of data
on disk to a version of cassandra in the 1.2.x series. While it seems
Is there anything that you can link that describes the pitfalls you mention? I'd
like a bit more information. Just for clarity's sake, are you recommending 1.0.9
-> 1.0.12 -> 1.1.12 -> 1.2.x? Or would 1.0.9 -> 1.1.12 -> 1.2.x suffice?
Regarding the placement strategy mentioned in a different post,
-tabpanel#comment-13748998
Cheers,
Mike
On Sun, Aug 25, 2013 at 4:06 AM, Janne Jalkanen <janne.jalka...@ecyrd.com> wrote:
This on cass 1.2.8
Ring state before decommission
-- Address Load Owns Host ID
TokenRack
UN 10.0.0.1 38.82 GB 33.3
? For both operations,
what it is time-consuming the data streaming from (or to) other node, right?
Thanks in advance.
Att.
*Rodrigo Felix de Almeida*
LSBD - Universidade Federal do Ceará
Project Manager
MBA, CSM, CSPO, SCJP
--
Mike Heffner m...@librato.com
Librato, Inc.
. We're using a slightly
modified version [1]. We currently backup every sst as soon as they hit
disk (tablesnap's inotify), but we're considering moving to a periodic
snapshot approach as the sst churn after going from 24 nodes -> 6 nodes is
quite high.
Mike
[1]: https://github.com/librato/tablesnap
Aiman,
I believe that is one of the cases we added a check for:
https://github.com/librato/tablesnap/blob/master/tablesnap#L203-L207
Mike
On Thu, Jul 11, 2013 at 1:54 PM, Aiman Parvaiz <ai...@grapheffect.com> wrote:
Thanks for the info Mike, we ran in to a race condition which was killing
I'm curious because we are experimenting with a very similar configuration,
what basis did you use for expanding the index_interval to that value? Do
you have before and after numbers or was it simply reduction of the heap
pressure warnings that you looked for?
thanks,
Mike
On Tue, Jul 9, 2013
On Mon, Jul 1, 2013 at 10:06 PM, Mike Heffner m...@librato.com wrote:
The only changes we've made to the config (aside from dirs/hosts) are:
Forgot to include we've changed this as well:
-partitioner: org.apache.cassandra.dht.Murmur3Partitioner
+partitioner
from
multiple replicas across the az/rack configuration?
Mike
On Tue, Jul 2, 2013 at 1:53 PM, sankalp kohli <kohlisank...@gmail.com> wrote:
This was a problem pre vnodes. I had several JIRA for that but some of
them were voted down saying the performance will improve with vnodes.
The main problem
to indicate that the sending node is
limiting our streaming rate.
Mike
On Tue, Jul 2, 2013 at 3:00 PM, Mike Heffner m...@librato.com wrote:
Sankalp,
Parallel sstableloader streaming would definitely be valuable.
However, this ring is currently using vnodes and I was surprised to see
suggestions for what to adjust to see better streaming performance? 5%
of what a single rsync can do seems somewhat limited.
Thanks,
Mike
--
Mike Heffner m...@librato.com
Librato, Inc.
Zealand
@aaronmorton
http://www.thelastpickle.com
On 28/02/2013, at 3:21 PM, Mike Koh defmike...@gmail.com wrote:
It has been suggested to me that we could save a fair amount of time and money
by taking a snapshot of only 1 replica (so every third node for most column
families). Assuming that we
It has been suggested to me that we could save a fair amount of time and
money by taking a snapshot of only 1 replica (so every third node for
most column families). Assuming that we are okay with not having the
absolute latest data, does this have any possibility of working? I feel
like it
within an sstable.)
We recently upgraded from 1.1.2 to 1.1.9.
Does anyone know if an offline scrub is recommended to be performed when
switching from STCS to LCS after upgrading from 1.1.2?
Any insight would be appreciated,
Thanks,
-Mike
On 2/17/2013 8:57 PM, Wei Zhu wrote:
We doubled
Hello Wei,
First thanks for this response.
Out of curiosity, what SSTable size did you choose for your usecase, and
what made you decide on that number?
Thanks,
-Mike
On 2/14/2013 3:51 PM, Wei Zhu wrote:
I haven't tried to switch compaction strategy. We started with LCS.
For us, after
Another piece of information that would be useful is advice on how to
properly set the SSTable size for your usecase. I understand the
default is 5MB, a lot of examples show the use of 10MB, and I've seen
cases where people have set is as high as 200MB.
Any information is appreciated,
-Mike
cause updates to be mistakenly dropped for being old.
Also, make sure you are running with a gc_grace period that is high
enough. The default is 10 days.
Hope this helps,
-Mike
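The gc_grace period mentioned above is a per-table setting; a hedged example with hypothetical names, spelling out the 10-day default:

```
-- 864000 seconds = 10 days. It should exceed the time needed to repair
-- every replica, or tombstones may be purged before they propagate.
ALTER TABLE my_ks.events WITH gc_grace_seconds = 864000;
```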
On 2/15/2013 1:13 PM, Víctor Hugo Oliveira Molinar wrote:
hello everyone!
I have a column family filled with event
.
Given we use an RF of 3, and LOCAL_QUORUM consistency for everything,
and we are not seeing errors, something seems to be working correctly.
Any idea what is going on above? Should I be alarmed?
-Mike
on what else needs to be done after the schema change.
I did these tests with Cassandra 1.1.9.
Thanks,
-Mike
?
Thanks,
-Mike
On 2/10/2013 3:27 PM, aaron morton wrote:
I would do #1.
You can play with nodetool setcompactionthroughput to speed things up,
but beware nothing comes for free.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http
the restore, a nodetool
repair was run. However, repair was going to run into some heavy
activity for our application, and we canceled that validation compaction
(2 of the 3 anti-entropy sessions had completed). The spin appears to
have started at the start of the second session.
Any hints?
-Mike
upgrade of the sstables over a number of days.
2) Upgrade one node at a time, running the clustered in a mixed
1.1.2-1.1.9 configuration for a number of days.
I would prefer #1, as with #2, streaming will not work until all the
nodes are upgraded.
I appreciate your thoughts,
-Mike
On 1/16
mark in for timestamp failed as expected and I don't see
a method on the DataStax java driver BoundStatement for setting it.
Thanks in advance.
/Mike Sample
Thanks Sylvain. I should have scanned Jira first. Glad to see it's on the
todo list.
On Wed, Feb 6, 2013 at 12:24 AM, Sylvain Lebresne <sylv...@datastax.com> wrote:
Not yet: https://issues.apache.org/jira/browse/CASSANDRA-4450
--
Sylvain
On Wed, Feb 6, 2013 at 9:06 AM, Mike Sample
Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 28/01/2013, at 4:24 PM, Mike Sample mike.sam...@gmail.com wrote:
Does the following FAQ entry hold even when the partition key is also
constrained in the query (by token())?
http
had any gotchas recently that I should be aware of before
performing this upgrade?
In order to upgrade, is the only thing that needs to change are the JAR
files? Can everything remain as-is?
Thanks,
-Mike
families (the former
makes sense, I'm just making sure).
-Mike
On 1/16/2013 11:08 AM, Jason Wee wrote:
always check NEWS.txt for instance for cassandra 1.1.3 you need to
run nodetool upgradesstables if your cf has counter.
On Wed, Jan 16, 2013 at 11:58 PM, Mike mthero...@yahoo.com
Does CQL3 support blob/BytesType literals for INSERT, UPDATE etc commands?
I looked at the CQL3 syntax (http://cassandra.apache.org/doc/cql3/CQL.html)
and at the DataStax 1.2 docs.
As for why I'd want such a thing, I just wanted to initialize some test
values for a blob column with cqlsh.
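For what it's worth, CQL3 does accept hexadecimal blob constants, and textAsBlob() can convert strings; a sketch against a hypothetical table:

```
-- 0x-prefixed hex literals are blob constants in CQL3.
INSERT INTO test_ks.blob_test (id, data) VALUES (1, 0xcafebabe);
INSERT INTO test_ks.blob_test (id, data) VALUES (2, textAsBlob('hello'));
```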
?
This is more related to the current activities of deletion, as opposed
to a major compaction (although the question is applicable to both). As
we delete rows, will our bloom filters grow?
-Mike
On 1/6/2013 3:49 PM, aaron morton wrote:
When these rows are deleted, tombstones will be created
be fairly
small (about 500,000 skinny rows per node, including replicas).
Any other thoughts on this?
-Mike
On 1/6/2013 3:49 PM, aaron morton wrote:
When these rows are deleted, tombstones will be created and stored in more
recent sstables. Upon compaction of sstables, and after gc_grace_period
operations.
Thanks!
-Mike
history on that? I
couldn't find too much information on it.
Thanks,
-Mike
On 12/16/2012 8:41 PM, aaron morton wrote:
1) Am I reading things correctly?
Yes.
If you do a read/slice by name and more than min compaction level
nodes where read the data is re-written so that the next read uses
I'm using 1.0.12 and I find that large sstables tend to get compacted
infrequently. I've got data that gets deleted or expired frequently. Is it
possible to use scrub to accelerate the clean up of expired/deleted data?
--
Mike Smith
Director Development, MailChannels
.
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 14/12/2012, at 3:01 AM, Mike Smith m...@mailchannels.com wrote:
I'm using 1.0.12 and I find that large sstables tend to get compacted
infrequently. I've got data that gets deleted
load on a
single column family?
Any insights would be appreciated,
-Mike
On 12/4/2012 3:33 PM, aaron morton wrote:
For background, a discussion on estimating working set
http://www.mail-archive.com/user@cassandra.apache.org/msg25762.html .
You can also just look at the size of tenured heap