Hi guys! I'm brand new to Cassandra, and I'm working on a database design.
I don't necessarily know all the advantages/limitations of Cassandra, so I'm
not sure that I'm doing it right...
It seems to me that I can divide my database into two parts:
1. The (mostly) normal data, where every piece
Hi Paul,
Thank you for your answer. About the first question: I wondered if it is
possible to work around this issue by relaxing some consistency. As I
understand you, it should be possible to implement this compareAndSet
operation in the presence of vector clocks; then the client is going to
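To make the idea concrete: Cassandra 0.6 does not actually expose vector clocks (it reconciles on timestamps), so the following is purely an illustrative sketch of what a client-driven compareAndSet could look like if it did. All names here (`Store`, `VectorClock`) are hypothetical, and the store is an in-memory stand-in for the cluster.

```python
# Hypothetical sketch: client-side compare-and-set over vector clocks.
# Cassandra 0.6 has no vector clocks; this only illustrates the concept.

class VectorClock:
    def __init__(self, counts=None):
        self.counts = dict(counts or {})

    def increment(self, node):
        c = VectorClock(self.counts)
        c.counts[node] = c.counts.get(node, 0) + 1
        return c

    def descends_from(self, other):
        # True if self has seen every event that other has seen.
        return all(self.counts.get(n, 0) >= v for n, v in other.counts.items())


class Store:
    """A toy single-node store standing in for the cluster."""
    def __init__(self):
        self.data = {}  # key -> (value, clock)

    def read(self, key):
        return self.data.get(key, (None, VectorClock()))

    def compare_and_set(self, key, expected_clock, new_value, node):
        _value, current = self.read(key)
        # Succeed only if nothing was written since our read (clocks equal).
        if current.descends_from(expected_clock) and expected_clock.descends_from(current):
            self.data[key] = (new_value, current.increment(node))
            return True
        return False


store = Store()
_, clock = store.read("counter")
ok_first = store.compare_and_set("counter", clock, 1, node="A")    # fresh clock: succeeds
ok_stale = store.compare_and_set("counter", clock, 2, node="B")    # stale clock: fails
```

The point of the sketch is that the clock comparison, not the value, decides whether the write is admitted; a real implementation would still need the server to make that check atomically.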
Hi all,
I made too many requests to Cassandra, and after a while I could no
longer connect to it. But I can still connect to it from another machine.
So does this mean Cassandra will block a client in some situations?
--
Best Regards
Jeff Zhang
Hi
I am doing some load testing with a 4-node cluster. My client is PHP. I found
that some reads/writes timed out no matter how I tuned the parameters. These
time-outs could be caught by client code. My question is: are these
time-outs normal even in a production environment? Should they be treated as
Have you read the article "WTF is a SuperColumn? An Intro to the Cassandra
Data Model"?
link: http://arin.me/blog/wtf-is-a-supercolumn-cassandra-data-model
It is a good article on the data model.
On Thu, Apr 22, 2010 at 10:38 AM,
No, as far as I know no one is working on transaction support in Cassandra.
Transactions are orthogonal to the design of Cassandra[1][2], although a
system could be designed incorporating Cassandra and other elements a la
Google's MegaStore[3] to support transactions. Google uses Paxos, one might
You might also consider using a Software Transactional Memory[1] approach. I
haven't personally tried it, but there is a Scala/Java framework named Akka
that provides both STM features and Cassandra support. Should be worth a
look. Here's a nice write-up from someone who has already done some
If Digg uses PHP with Cassandra, can the library really be that old?
Or are they using their own custom PHP Cassandra client? (Probably, but just
making sure.)
On Fri, Apr 16, 2010 at 2:13 PM, Jonathan Ellis jbel...@gmail.com wrote:
On Fri, Apr 16, 2010 at 12:50 PM, Lee Parker
On Wed, Apr 21, 2010 at 9:50 AM, Mark Greene green...@gmail.com wrote:
Right, it's a similar concept to DB sharding, where you spread the write load
across different DB servers; it won't necessarily increase the throughput
of any one DB server, but rather of the servers collectively.
Except with Cassandra,
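The sharding analogy above can be sketched with a toy key partitioner. This is not how Cassandra's RandomPartitioner actually places data (it uses token ranges on a ring); it only illustrates why spreading keys over nodes raises aggregate write throughput while each individual node's throughput stays the same.

```python
import hashlib

# Toy hash partitioner: each key maps to exactly one node, so writes
# spread across the cluster. One node's throughput is unchanged, but
# the cluster's aggregate throughput grows with the node count.
nodes = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]

def node_for(key):
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

placement = {}
for i in range(1000):
    placement.setdefault(node_for("user%d" % i), []).append(i)
```

With 1000 keys hashed over 4 nodes, every node ends up owning a share of the writes rather than one server taking them all.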
Suppose I have a SuperColumn CF where one of the SuperColumns in each
row is being treated as a list (e.g. keys only, values are just
empty). In this list, values will only ever be added; deletion never
occurs. If two processes simultaneously add values to this
list (on different nodes,
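For the add-only case described above, Cassandra reconciles replicas at the column level: the merged row is effectively the union of column names, with the higher timestamp winning per name. A minimal sketch of that merge rule, with each replica modeled as a map of column name to timestamp:

```python
# Sketch of column-level reconciliation for an add-only "list" stored as
# column names with empty values. Two replicas merge by taking the union
# of names, last write wins per name via timestamps.

def merge(replica_a, replica_b):
    """Each replica is {column_name: timestamp}."""
    merged = dict(replica_a)
    for name, ts in replica_b.items():
        if name not in merged or ts > merged[name]:
            merged[name] = ts
    return merged

a = {"item1": 100, "item2": 105}   # added via one node
b = {"item2": 101, "item3": 110}   # added concurrently via another node
result = merge(a, b)
```

Because additions touch distinct column names, neither concurrent add is lost; conflict only arises when the same name is written twice, and then the newer timestamp wins.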
fyi,
https://issues.apache.org/jira/browse/CASSANDRA-930
https://issues.apache.org/jira/browse/CASSANDRA-982
On Thu, Apr 22, 2010 at 11:11 AM, Mike Malone m...@simplegeo.com wrote:
On Wed, Apr 21, 2010 at 9:50 AM, Mark Greene green...@gmail.com wrote:
Right, it's a similar concept to DB
http://wiki.apache.org/cassandra/FAQ#range_ghosts
On Thu, Apr 22, 2010 at 5:29 PM, Carlos Sanchez
carlos.sanc...@riskmetrics.com wrote:
I have a curious question.
I am doing some testing where I insert 500 rows into a super column family and
then delete one row. I make sure the row was indeed
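The FAQ link above explains the likely cause: a deleted row can still appear in range query results with zero columns (a "range ghost") until compaction removes it. A common client-side workaround, sketched here with a plain dict standing in for a get_range_slices result, is simply to skip rows with no columns:

```python
# Range ghosts: a deleted row may still show up in get_range_slices
# with zero columns until it is compacted away. A common client-side
# fix is to skip empty rows. `rows` mimics a slice result.

def live_rows(rows):
    """Filter out ghost rows (present key, no columns)."""
    return {key: cols for key, cols in rows.items() if cols}

rows = {
    "row1": {"name": "alice"},
    "row2": {},                  # deleted row: ghost with no columns
    "row3": {"name": "carol"},
}
survivors = live_rows(rows)
```

So a count of 500 keys after one deletion is expected; counting only rows that actually have columns gives 499.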
On Thu, Apr 22, 2010 at 1:06 PM, Lucas Di Pentima
lu...@di-pentima.com.ar wrote:
Hi,
I would like to see example code for the batch() method; I searched for it
on Google, but I couldn't find any. Reading the inline comments, this
operation could be useful, for example, to insert some record
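Since the original post doesn't name the client library, here is a hedged sketch of the general batching pattern such a batch() method typically implements: queue mutations locally, then send them in one round trip. The `send` callable stands in for the actual batch_mutate RPC and is hypothetical.

```python
# Illustrative sketch of the pattern behind a client's batch() method:
# queue mutations locally, flush them to the server in one round trip.
# `send` is a stand-in for the real batch_mutate call (hypothetical).

class Batch:
    def __init__(self, send, queue_size=100):
        self.send = send
        self.queue_size = queue_size
        self.pending = []

    def insert(self, key, columns):
        self.pending.append((key, columns))
        if len(self.pending) >= self.queue_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.send(self.pending)
            self.pending = []


calls = []
batch = Batch(send=calls.append, queue_size=2)
batch.insert("row1", {"name": "alice"})
batch.insert("row2", {"name": "bob"})    # hitting queue_size triggers a flush
batch.insert("row3", {"name": "carol"})
batch.flush()
```

Three inserts cost two round trips instead of three; with a larger queue size the savings grow accordingly.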
Yes, I've tried the patch at
https://issues.apache.org/jira/browse/THRIFT-347, but it does not seem to
work for me. I suspect I am hitting another issue with Thrift. If my column
value size is more than 8KB (with the Thrift PHP extension enabled), my
client is more likely to get a timed-out error. I am still
I was getting client timeouts in ColumnFamilyRecordReader.maybeInit() when
MapReducing. So I've reduced the Range Batch Size to 256 (from 4096) and
this seems to have fixed my problem, although it has slowed things down a
bit -- presumably because there are 16x more calls to get_range_slices.
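The 16x figure follows directly from the batch sizes: each get_range_slices call fetches one batch, so halving the batch size doubles the call count. A quick sketch of the arithmetic (the total row count here is an illustrative example, not from the original post):

```python
import math

# Reducing the range batch size from 4096 to 256 means 16x as many
# get_range_slices calls over the same key range, but each call is
# smaller and so less likely to hit the RPC timeout.

def calls_needed(total_rows, batch_size):
    return math.ceil(total_rows / batch_size)

total = 4096 * 256  # example row count chosen to divide evenly
big_batches = calls_needed(total, 4096)
small_batches = calls_needed(total, 256)
```

The trade-off is exactly the one described: more round trips (slower overall) in exchange for each individual call staying under the timeout.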