2.1.4 is getting pretty old. There’s a DTCS deletion tweak in 2.1.5 (
https://issues.apache.org/jira/browse/CASSANDRA-8359 ) that may help you.
2.1.5 and 2.1.6 have some memory leak issues in DTCS, so go to 2.1.7 or newer
(probably 2.1.9, unless you have a compelling reason not to go that far)
Thank you, Mark.
On Tue, Sep 15, 2015 at 5:44 AM, Mark Greene wrote:
> Hey Rock,
>
> I've seen this occur as well. I've come to learn that in some cases, like
> a network blip, the join can fail. There is usually something in the log to
> the effect of "Stream failed"
>
> When I encounter this i
PS: here's the code in the Java driver, in Metadata.TokenMap.build:
for (KeyspaceMetadata keyspace : keyspaces)
{
    ReplicationStrategy strategy = keyspace.replicationStrategy();
    Map<Token, Set<Host>> ksTokens = (strategy == null)
        ? makeNonReplicatedMap(tokenToPrimary)
        : strategy.computeTokenToReplicaMap(tokenToPrimary);
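For anyone without the driver source handy, the shape of that loop can be sketched with plain collections. This is NOT the driver's real code — tokens are Longs, hosts are Strings, and the replica placement is a simplified ring walk — but it shows the key point: a full token → replica-set map is materialized and retained for every keyspace, so memory grows linearly with the number of keyspaces.

```java
import java.util.*;

// Self-contained sketch of what the per-keyspace token map build does.
// Token = Long, Host = String (hypothetical stand-ins, not driver classes).
public class TokenMapSketch {

    // Simplified stand-in for strategy.computeTokenToReplicaMap: pick the
    // next `rf` distinct hosts walking the ring from each token.
    static Map<Long, Set<String>> computeTokenToReplicaMap(
            List<Long> ring, Map<Long, String> tokenToPrimary, int rf) {
        Map<Long, Set<String>> result = new HashMap<>();
        for (int i = 0; i < ring.size(); i++) {
            Set<String> replicas = new LinkedHashSet<>();
            for (int j = 0; replicas.size() < rf && j < ring.size(); j++) {
                replicas.add(tokenToPrimary.get(ring.get((i + j) % ring.size())));
            }
            result.put(ring.get(i), replicas);
        }
        return result;
    }

    public static void main(String[] args) {
        // Assumed cluster shape: 10 nodes, 256 vnodes each (hypothetical).
        int nodes = 10, vnodesPerNode = 256, keyspaces = 2000;
        List<Long> ring = new ArrayList<>();
        Map<Long, String> tokenToPrimary = new HashMap<>();
        for (long t = 0; t < (long) nodes * vnodesPerNode; t++) {
            ring.add(t);
            tokenToPrimary.put(t, "10.0.0." + (t % nodes)); // round-robin ownership
        }
        // The driver keeps one map like this per keyspace, all at once.
        Map<Long, Set<String>> oneKeyspace =
                computeTokenToReplicaMap(ring, tokenToPrimary, 3);
        long perKeyspace = oneKeyspace.size();
        System.out.println("entries per keyspace: " + perKeyspace);
        System.out.println("entries for 2000 keyspaces: " + perKeyspace * keyspaces);
    }
}
```

With 2,560 ring tokens that's 5,120,000 retained map entries across 2,000 keyspaces, which is where the memory described below goes.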
cassandra: 2.1.7
java driver: datastax java driver 2.1.6
Here is the problem:
My application uses 2000+ keyspaces and dynamically creates keyspaces and
tables. In the Java client, Metadata.tokenMap.tokenToHost then uses about
1 GB of memory, which causes a lot of full GCs.
As
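The ~1 GB figure is plausible from a back-of-envelope estimate. The cluster shape (10 nodes × 256 vnodes) and the ~200 bytes of overhead per map entry below are assumptions for illustration, not measured numbers:

```java
// Back-of-envelope estimate of driver token-map memory for 2000 keyspaces.
// Ring shape and per-entry overhead are rough assumptions, not measurements.
public class TokenMapMemoryEstimate {
    public static void main(String[] args) {
        long keyspaces = 2000;
        long nodes = 10, vnodesPerNode = 256; // assumed ring shape
        long tokens = nodes * vnodesPerNode;  // 2,560 tokens in the ring
        long entries = keyspaces * tokens;    // one entry per (keyspace, token)
        long bytesPerEntry = 200;             // rough: map node + key + replica set
        long totalMiB = entries * bytesPerEntry / (1024 * 1024);
        System.out.println(entries + " entries, roughly " + totalMiB + " MiB");
    }
}
```

That lands in the same ballpark as the ~1 GB the client reports, which suggests the retained per-keyspace maps alone can account for it.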