Please refer to the following URL if you'd like to know about multi-tenancy with
Cassandra.
http://wiki.apache.org/cassandra/MultiTenant
Also, Hector supports a multi-tenant data model on Cassandra:
https://github.com/hector-client/hector/wiki/Virtual-Keyspaces
Recently, I have discussed multi-te
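As I read the Hector wiki page above, virtual keyspaces let many tenants share one
physical keyspace by transparently prefixing every row key with a per-tenant prefix.
The same convention can be sketched directly in cassandra-cli; the keyspace, column
family, tenant and user names below are made up, and UTF-8 keys, comparator and
values are assumed:

  use shared_ks;
  set users['tenantA:user42']['email'] = 'a@example.com';
  set users['tenantB:user42']['email'] = 'b@example.com';
  get users['tenantA:user42'];

As I understand it, Hector's virtual keyspace wrapper applies this prefixing
automatically, so application code can keep using plain keys such as 'user42'.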
Dear all
I have a Cassandra cluster with 2 nodes.
I was trying to increase the replication factor of a keyspace in Cassandra to 2.
I did the following steps:
UPDATE KEYSPACE demo WITH strategy_options = {DC1:2,DC2:2}; on both the nodes
Then I ran nodetool repair on both the nodes
Then I ran my
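For reference, a minimal version of that sequence looks like the following; the
strategy, data center names and host names are placeholders for the actual topology.
The schema update only needs to be issued on one node, since it propagates to the
rest of the cluster; repair is then run on every node so existing data is streamed
to the new replicas. In cassandra-cli:

  update keyspace demo
    with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
    and strategy_options = {DC1:2, DC2:2};

Then, from a shell:

  nodetool -h node1 repair demo
  nodetool -h node2 repair demo

Note that {DC1:2, DC2:2} means two replicas in each data center, i.e. four copies in
total.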
Would you want to view data like this: "there was a key which had this column,
but now it does not have any value as of this time"?
Unless you specifically want this information, I believe you should just delete
the column, rather than have an alternate value for NULL or create a composite
column.
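For instance, in cassandra-cli the column can simply be removed from the row (the
column family, row key and column name here are placeholders, with UTF-8 types
assumed):

  del users['user42']['middle_name'];

Reads will then see no column at all under that name, which is usually a better
representation of "no value" than a sentinel NULL column.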
The version is 1.1.0
Prakrati Agrawal | Developer - Big Data(I&D)| 9731648376 | www.mu-sigma.com
From: Dave Brosius [mailto:dbros...@mebigfatguy.com]
Sent: Monday, June 11, 2012 10:07 AM
To: user@cassandra.apache.org
Subject: Re: Out of memory error
What version of Cassandra?
might be related to https://issues.apache.org/jira/browse/CASSANDRA-4098
What version of Cassandra?
might be related to https://issues.apache.org/jira/browse/CASSANDRA-4098
On 06/11/2012 12:07 AM, Prakrati Agrawal wrote:
Sorry
I ran list columnFamilyName; and it threw this error.
Thanks and Regards
Prakrati
From: aaron morton [mailto:aa...@thelastpickle.com]
Sorry
I ran list columnFamilyName; and it threw this error.
Thanks and Regards
Prakrati
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Saturday, June 09, 2012 12:18 AM
To: user@cassandra.apache.org
Subject: Re: Out of memory error
When you ask a question please include the query or f
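As a side note, cassandra-cli's list command walks over rows and columns and can
pull a lot of data into memory when rows are wide; asking for a bounded number of
rows keeps the result small while investigating, e.g.:

  list columnFamilyName limit 10;

Here columnFamilyName is the column family from the thread, and limit is a standard
option of the list command.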
Dear all,
I'm really excited about Cassandra's peer-to-peer architecture and sorted
values.
Currently I'm blocked in my trials: I cannot insert longs into 'val' in:
create columnfamily entries (
id varchar,
va varchar,
ts bigint,
val bigint,
PRIMARY KEY (id, va, ts)
);
I g
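For reference, with that schema a CQL 3 insert would look like this (the values are
made up; since ts and val are bigint, the literals are written without quotes):

  -- id and va are varchar (text), ts and val are 64-bit integers
  INSERT INTO entries (id, va, ts, val)
  VALUES ('sensor-1', 'temperature', 1339400000000, 42);

If a client library sits in between, the long has to be bound or serialized as a
64-bit integer rather than as a string, otherwise the insert is typically rejected
with a validation error.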
Hi Aaron,
Thanks for the reply. I did some more tests and it looks like the problem is
not in deletes/writes; it is rather in reads (I do a read before deleting).
It turns out that the problem was in another CF which had a wide row of 1.2GB
and row cache enabled. Cassandra tries to read this row into the cache and beco
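If the goal is just to stop Cassandra from pulling that 1.2GB row into the row
cache, one option is to switch the offending column family to key caching only; in
1.1 caching is a per-column-family attribute. A cassandra-cli sketch, with wide_cf
standing in for the actual CF name:

  update column family wide_cf with caching = 'keys_only';

That keeps the key cache benefit while avoiding caching whole wide rows.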
I require durability for inserts to my column families so I'm using
batch mode to insert data.
However, I have some column families which I use for less important data
(indexes) and which are much more write-intensive.
If I could change the commit log setting only for them to periodic
instead of batch,
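For reference, the commit log sync mode is a node-level setting in cassandra.yaml
rather than a per-column-family one, so as far as I know it cannot be switched to
periodic for just those CFs. The two modes look like this (the window/period values
below are only illustrative). Either:

  commitlog_sync: batch
  # group writes arriving within this window into one fsync before acking
  commitlog_sync_batch_window_in_ms: 50

or:

  commitlog_sync: periodic
  # ack writes immediately and fsync the commit log this often
  commitlog_sync_period_in_ms: 10000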