-- Forwarded message --
From: Parth Setya
Date: Fri, May 22, 2015 at 5:14 PM
Subject: INFO LOGS NOT written to System.log (Intermittently)
To: user@cassandra.apache.org
Hi
I have a *3 node* cluster.
Logging Level: *INFO*
We observed that there is nothing written to the system.log file (on all
three nodes) for a substantial duration of time (~24 minutes).
*INFO [CompactionExecutor:52531] 2015-05-20 05:16:38,187 CompactionController.java (line 198) Compacti
As per this thread
http://stackoverflow.com/questions/10520110/how-do-i-delete-all-data-in-a-cassandra-column-family
What you can do to physically remove the files is to go to
/var/lib/cassandra/data/keyspace_name and then manually delete the
directory with the name of that column family. Do this
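If you want to script that manual cleanup rather than do it by hand, here is a minimal Java sketch (the keyspace and column family names are hypothetical, and it assumes the per-column-family directory layout under /var/lib/cassandra/data); the usual order is to drop or truncate the column family first and only then remove the leftover directory:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

public class RemoveCfDirectory {
    public static void main(String[] args) throws IOException {
        // Hypothetical names: adjust to your keyspace and column family.
        Path cfDir = Paths.get("/var/lib/cassandra/data", "keyspace_name", "column_family_name");
        if (!Files.exists(cfDir)) {
            return; // nothing to remove
        }
        try (Stream<Path> paths = Files.walk(cfDir)) {
            // Delete children before their parent directories.
            paths.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new RuntimeException("Could not delete " + p, e);
                }
            });
        }
    }
}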
Hi
I am adding a new expiring column to an existing column family in
Cassandra. I want this new column to expire at the same time as all the
other expiring columns in the column family.
One way of doing this is to get the TTL of the existing expiring columns in
that CF and set that value on my new column.
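A minimal sketch of that approach with Hector (cluster, keyspace, column family, and column names below are all hypothetical); one caveat worth verifying on your Cassandra version is whether the TTL read back is the originally-set value or the seconds remaining, since only the latter makes the new column expire at exactly the same moment as the old ones:

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.HColumn;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;
import me.prettyprint.hector.api.query.ColumnQuery;

public class CopyTtl {
    public static void main(String[] args) {
        StringSerializer ss = StringSerializer.get();
        Cluster cluster = HFactory.getOrCreateCluster("TestCluster", "localhost:9160");
        Keyspace keyspace = HFactory.createKeyspace("my_keyspace", cluster);

        // Read one of the existing expiring columns from the row.
        ColumnQuery<String, String, String> query = HFactory.createColumnQuery(keyspace, ss, ss, ss);
        query.setColumnFamily("my_cf").setKey("row-key").setName("existing_column");
        HColumn<String, String> existing = query.execute().get();
        if (existing == null) {
            return; // column already expired or never written
        }

        // TTL (in seconds) carried by the column we just read.
        int ttlSeconds = existing.getTtl();

        // Write the new column with the same TTL.
        Mutator<String> mutator = HFactory.createMutator(keyspace, ss);
        HColumn<String, String> newColumn = HFactory.createColumn("new_column", "value", ss, ss);
        newColumn.setTtl(ttlSeconds);
        mutator.insert("row-key", "my_cf", newColumn);

        HFactory.shutdownCluster(cluster);
    }
}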
Hello people
SSTable split gives the following error:

Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.drainBuffers(ConcurrentLinkedHashMap.java:434)
Hi
*Setup*
*3 Node Cluster*
API - *Hector*
CL - *QUORUM*
RF - *3*
Compaction Strategy - *Size Tiered Compaction*
*Use Case*
I have about *320 million rows* (~12 to 15 columns each) worth of data
stored in Cassandra. In order to generate a report containing ALL that
data, I do the following:
1. Run
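For the full-export step, the usual Hector pattern is to page over all rows with a RangeSlicesQuery, feeding the last key of each page back in as the start key of the next. A minimal sketch (keyspace, column family, and page sizes are hypothetical; with RandomPartitioner the rows come back in token order, not key order):

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.OrderedRows;
import me.prettyprint.hector.api.beans.Row;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.query.RangeSlicesQuery;

public class ReportScan {
    public static void main(String[] args) {
        StringSerializer ss = StringSerializer.get();
        Cluster cluster = HFactory.getOrCreateCluster("TestCluster", "localhost:9160");
        Keyspace keyspace = HFactory.createKeyspace("my_keyspace", cluster);

        int pageSize = 1000;   // rows fetched per round trip
        String startKey = "";  // empty key = start of the ring
        boolean firstPage = true;

        while (true) {
            RangeSlicesQuery<String, String, String> query =
                    HFactory.createRangeSlicesQuery(keyspace, ss, ss, ss)
                            .setColumnFamily("my_cf")
                            .setKeys(startKey, "")
                            .setRange("", "", false, 100)  // up to 100 columns per row
                            .setRowCount(pageSize);

            OrderedRows<String, String, String> rows = query.execute().get();

            for (Row<String, String, String> row : rows) {
                // The first row of every page after the first repeats the
                // last row of the previous page, so skip it.
                if (!firstPage && row.getKey().equals(startKey)) {
                    continue;
                }
                // ... append row.getColumnSlice() to the report ...
            }

            if (rows.getCount() < pageSize) {
                break; // last page
            }
            startKey = rows.peekLast().getKey();
            firstPage = false;
        }
        HFactory.shutdownCluster(cluster);
    }
}

Keeping pageSize modest trades extra round trips for less memory pressure on both the client and the coordinator node.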
Hi
Could someone please shed some light on which is the more efficient way to
retrieve data from Cassandra: using a Range Slice Query (I'm using Hector)
or filtering using secondary indexes?
Best,
Parth
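For comparison, the secondary-index route in Hector goes through IndexedSlicesQuery, which requires a secondary index on the column being filtered and pushes the filtering to the server. A minimal sketch (keyspace, column family, and the indexed column name are hypothetical):

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.OrderedRows;
import me.prettyprint.hector.api.beans.Row;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.query.IndexedSlicesQuery;

public class IndexedLookup {
    public static void main(String[] args) {
        StringSerializer ss = StringSerializer.get();
        Cluster cluster = HFactory.getOrCreateCluster("TestCluster", "localhost:9160");
        Keyspace keyspace = HFactory.createKeyspace("my_keyspace", cluster);

        IndexedSlicesQuery<String, String, String> query =
                HFactory.createIndexedSlicesQuery(keyspace, ss, ss, ss);
        query.setColumnFamily("my_cf");
        query.addEqualsExpression("status", "ACTIVE"); // "status" must have a secondary index
        query.setRange("", "", false, 100);            // columns to return per matching row
        query.setRowCount(1000);

        OrderedRows<String, String, String> rows = query.execute().get();
        for (Row<String, String, String> row : rows) {
            System.out.println(row.getKey());
        }
        HFactory.shutdownCluster(cluster);
    }
}

As a rule of thumb, secondary indexes pay off when a query selects a small, bounded subset of rows on a reasonably low-cardinality column; for reading out (nearly) the whole column family, a range-slice scan like the one above is typically the better fit.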
Hi
I am attempting to add a Cassandra node which has some existing data on it
to an existing cluster. Is this a legitimate thing to do?
And what will happen if the same data with different timestamps exists on
the node to be added and in the existing cluster?
What will happen if the auto_bootstrap property
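On the different-timestamps part of the question: Cassandra reconciles two copies of the same column by keeping the one with the higher write timestamp (last write wins), so once the data has been repaired and compacted together, reads return whichever copy was written last. A tiny plain-Java illustration of that rule (a model for illustration, not Cassandra's own code):

public class LastWriteWins {

    // A minimal model of one column version: its value and write timestamp (microseconds).
    static final class Cell {
        final String value;
        final long timestampMicros;

        Cell(String value, long timestampMicros) {
            this.value = value;
            this.timestampMicros = timestampMicros;
        }
    }

    // The reconciliation rule: the copy with the higher timestamp wins (ties are resolved separately).
    static Cell reconcile(Cell a, Cell b) {
        return a.timestampMicros >= b.timestampMicros ? a : b;
    }

    public static void main(String[] args) {
        Cell onJoiningNode = new Cell("stale-value", 1_431_000_000_000_000L);
        Cell inCluster     = new Cell("fresh-value", 1_432_000_000_000_000L);
        System.out.println(reconcile(onJoiningNode, inCluster).value); // prints fresh-value
    }
}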