Perhaps other, more experienced and reputable contributors to this list can
comment, but to be frank: Cassandra is probably not for you (at least for now).
I personally feel Cassandra is one of the stronger NoSQL options out there and
has the potential to become the de facto standard; but it's not
On Fri, Dec 10, 2010 at 11:39 PM, Edward Capriolo wrote:
> On Thu, Dec 9, 2010 at 10:40 PM, Bill de hÓra wrote:
>>
>>
>> On Tue, 2010-12-07 at 21:25 -0500, Edward Capriolo wrote:
>>
>> The idea behind "micrandra" is, for a 6-disk system, to run 6 instances of
>> Cassandra, one per disk. Use the RackAwareSnitch to make sure no
>> replicas live on the same node.
On Thu, Dec 9, 2010 at 10:40 PM, Bill de hÓra wrote:
>
>
> On Tue, 2010-12-07 at 21:25 -0500, Edward Capriolo wrote:
>
> The idea behind "micrandra" is, for a 6-disk system, to run 6 instances of
> Cassandra, one per disk. Use the RackAwareSnitch to make sure no
> replicas live on the same node.
>
> Th
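One hedged way to express that layout, assuming the 0.7-era PropertyFileSnitch and its cassandra-topology.properties format (the IPs, host names, and datacenter name below are all hypothetical): give every on-host instance the same "rack" name, so that rack-aware replica placement treats each physical box as a single failure domain and avoids putting two replicas on it.

```properties
# Hypothetical cassandra-topology.properties for PropertyFileSnitch.
# Each physical host is modeled as one "rack"; all of its per-disk
# instances (one IP alias per instance) share that rack name.

# host1 -- one instance per disk
10.0.0.11=DC1:host1
10.0.0.12=DC1:host1
10.0.0.13=DC1:host1

# host2 -- one instance per disk
10.0.0.21=DC1:host2
10.0.0.22=DC1:host2
10.0.0.23=DC1:host2

# fallback for nodes not listed above
default=DC1:unknown
```

Whether replicas actually avoid a shared rack depends on the replication strategy in use, so treat this as a sketch of the idea rather than a drop-in config.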
Or you can just start at the (n + 1)th id, given that ids must be unique (you
don't have to specify an existing id as the start of a slice). You don't HAVE
to load the n + 1 record.
This (slightly) more optimal approach has the disadvantage that you don't
know with certainty when you have reached the
How about KeptCollections (backed by ZooKeeper)?
https://github.com/anthonyu/KeptCollections
Thanks,
Mubarak
On Fri, Dec 10, 2010 at 12:15 PM, Germán Kondolf wrote:
> I don't know much about ZooKeeper, but as far as I have read, it runs
> outside the JVM process.
> Hazelcast is just a framework and you can
So you're actually fetching n + 1 records, correct? So this is the right way
to do it?
On Sat, Dec 11, 2010 at 1:02 PM, Tyler Hobbs wrote:
> Yes, what you described is the correct way to do it. Your next slice will
> start with that 11th column.
>
> - Tyler
>
>
> On Fri, Dec 10, 2010 at 7:01 PM, J
Yes, what you described is the correct way to do it. Your next slice will
start with that 11th column.
- Tyler
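Tyler's scheme above can be sketched without a live cluster. The `get_slice` helper below stands in for a real slice call (e.g. pycassa's `ColumnFamily.get` with `column_start` and `column_count`) over an in-memory sorted column map; all names here are illustrative, not a real client API.

```python
# Sketch of the n+1 slice-pagination pattern: fetch page_size + 1
# columns, show the first page_size, and use the extra column's name
# (inclusive) as the start of the next slice.
from collections import OrderedDict

COLUMNS = OrderedDict((f"col{i:03d}", f"value{i}") for i in range(25))

def get_slice(start, count):
    """Simulate a slice: up to `count` (name, value) pairs with name >= start."""
    names = [n for n in COLUMNS if n >= start]
    return [(n, COLUMNS[n]) for n in names[:count]]

def paginate(page_size):
    start = ""  # empty start means "from the first column"
    while True:
        rows = get_slice(start, page_size + 1)  # fetch one extra column
        yield rows[:page_size]
        if len(rows) <= page_size:  # no extra column -> this was the last page
            return
        start = rows[page_size][0]  # the (n+1)th column starts the next slice

pages = list(paginate(10))  # 25 columns -> pages of 10, 10, 5
```

Because slice starts are inclusive and column names are unique, no column is skipped or repeated across pages.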
On Fri, Dec 10, 2010 at 7:01 PM, Joshua Partogi wrote:
> Hi all,
>
> I am interested to see how people do record pagination with Cassandra
> because I cannot find anything like M
Hi all,
I am interested to see how people do record pagination with Cassandra
because I cannot find anything like MySQL's LIMIT in Cassandra.
From what I understand, you need to tell Cassandra the record ID for the
beginning of the slice and the number of records you want to get after that
Rec
what about https://github.com/suguru/cassandra-webconsole? any good?
On Fri, Dec 10, 2010 at 2:00 PM, Aaron Morton wrote:
> This is the only thing I can think of
> https://github.com/driftx/chiton
>
> Have not used it myself.
>
> Aaron
> On 11/12/2010, at 5:33 AM, Liangzhao Zeng wrote:
>
> >
This is the only thing I can think of
https://github.com/driftx/chiton
Have not used it myself.
Aaron
On 11/12/2010, at 5:33 AM, Liangzhao Zeng wrote:
> Is there any database viewer for Cassandra to browse the content of the
> database, like what DB2 or Oracle have?
>
>
> Thanks,
>
> Liang
Hi,
I have been reading up on Cassandra for the past few weeks and I am
highly impressed by the features it offers. At work, we are starting
work on a product that will handle several million CDRs (Call Data
Records, each of which can basically be thought of as a .CSV file) per day. We will
have to store the data
On Fri, Dec 10, 2010 at 12:49 PM, Alvin UW wrote:
> Hello,
>
>
> I got a consistency problem in Cassandra.
>
> Given a column family with a record:
>   Id  Name
>   1   David
>
> There are three replicas for this column family.
>
> Assume there
> Assume there are two write operations issued by the same application
> in this order: write_one("1", "Dan"); write_one("1", "Ken").
> What will Read_all("1") get?
Assuming read_all means reading at consistency level ALL, it sees the
latest value ("Ken").
> Assume the above two write ope
Hello,
I got a consistency problem in Cassandra.
Given a column family with a record:
  Id  Name
  1   David
There are three replicas for this column family.
Assume there are two write operations issued by the same application
by t
As far as I know, they are adding a clustered semaphore in the next version.
On Fri, Dec 10, 2010 at 5:15 PM, Germán Kondolf wrote:
> I don't know much about ZooKeeper, but as far as I have read, it runs
> outside the JVM process.
> Hazelcast is just a framework and you can programmatically start and
> shutdow
I don't know much about ZooKeeper, but as far as I have read, it runs
outside the JVM process.
Hazelcast is just a framework and you can programmatically start and
shut down the cluster; it's just an XML file to configure it.
Hazelcast also provides good caching features to integrate with
Hibernate, distributed exe
Thanks for the feedback. Regarding locking, has anyone done a comparison
to ZooKeeper? Does ZooKeeper provide functionality beyond Hazelcast?
On 12/10/2010 11:08 AM, Norman Maurer wrote:
Hi there,
I'm not using it atm but plan to in my next project. It really looks nice :)
Bye,
Norman
2010/12/1
Hi there,
I'm not using it atm but plan to in my next project. It really looks nice :)
Bye,
Norman
2010/12/10 Germán Kondolf :
> Hi, I'm using it as a complement to Cassandra, to avoid "duplicate"
> searches and duplicate content at a given moment in time.
> It has worked really nicely so far, no criti
Hi, I'm using it as a complement to Cassandra, to avoid "duplicate"
searches and duplicate content at a given moment in time.
It has worked really nicely so far, with no critical issues, at least in the
functionality I'm using from it.
--
//GK
german.kond...@gmail.com
// sites
http://twitter.com/germanklf
http
> Over the past month or so, it looks like memory has slowly
> been exhausted. Both nodetool drain and jmap can't run, and
> produce this error:
>
> Error occurred during initialization of VM
> Could not reserve enough space for object heap
>
> We've got Xmx/Xms set to 4GB.
>
> top sh
http://www.hazelcast.com/product.jsp
Has anyone tested Hazelcast as a distributed locking mechanism for Java
clients? It seems very attractive on the surface.
> That's finally a precise statement! :) I was wondering what "to at least 1
> replica's commit log" is supposed to actually mean:
> http://wiki.apache.org/cassandra/API
The main idea is that it has been "officially delivered" to one
replica. If Cassandra only did batch-wise commit such that
Howdi,
We're using Cassandra 0.6.6 - intending to wait until 0.7 before
we do any more upgrades.
We're running a cluster of 16 boxes of 7.1GB each, on Amazon EC2
using Ubuntu 10.04 (LTS).
Today we saw one box kick its little feet up, and after investigating
the other machines, it looks li
> If you switch your writes to CL ONE when a failure occurs, you might as well
> use ONE for all writes. ONE and QUORUM behave the same when all nodes are
> working correctly.
Consistency-wise yes, but not durability-wise. Writing at QUORUM but
reading at ONE is useful if you want higher durabili
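The overlap argument behind these consistency levels comes down to simple arithmetic: a read and a write are guaranteed to intersect on at least one replica exactly when R + W > N. A minimal sketch (the helper function is ours, not a Cassandra API; numbers assume RF=3):

```python
# Illustrative check of the consistency-level overlap rule: any write
# set of size w and read set of size r over n replicas must share a
# replica iff r + w > n (pigeonhole principle).

def overlap_guaranteed(n, w, r):
    """True if every read of r replicas must see every write to w replicas."""
    return r + w > n

N = 3
assert overlap_guaranteed(N, w=2, r=2)       # QUORUM writes + QUORUM reads: consistent
assert not overlap_guaranteed(N, w=2, r=1)   # QUORUM writes + ONE reads: more durable,
                                             # but a read may hit the replica that
                                             # missed the write
assert not overlap_guaranteed(N, w=1, r=1)   # ONE/ONE: neither guarantee
```

This is why QUORUM/ONE buys durability (two commit logs per write) without buying read-your-writes consistency.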
Is there any database viewer for Cassandra to browse the content of the
database, like what DB2 or Oracle have?
Thanks,
Liangzhao
Thanks for your help Peter.
We gave up and rolled back to our mysql implementation (we did all writes to
our old store in parallel so we did not lose anything).
Problem was that every solution we came up with would require at least one major
compaction before the new nodes could join and our clus
Recent versions of Cassandra (I noticed it in RC1, which is what I am using)
add a very cool "estimateKeys" JMX operation for each column family. I have
a quick question: Is the value an estimate for the number of keys on a
particular node or for the entire cluster?
I have a 5 node cluster (RF=
On Fri, 2010-12-10 at 10:17 +0100, Massimo Carro wrote:
> unsubscribe
http://wiki.apache.org/cassandra/FAQ#unsubscribe
--
Eric Evans
eev...@rackspace.com
RPM is available from http://rpm.riptano.com
Artifacts for maven folks are available as well:
http://mvn.riptano.com/content/repositories/riptano/
-Nate
On Fri, Dec 10, 2010 at 11:00 AM, Eric Evans wrote:
>
> I'd have thought all that turkey and stuffing would have done more
> damage to momentu
On Dec 9, 2010, at 18:50, Tyler Hobbs wrote:
> If you switch your writes to CL ONE when a failure occurs, you might as well
> use ONE for all writes. ONE and QUORUM behave the same when all nodes are
> working correctly.
That's finally a precise statement! :) I was wondering what "to at leas
unsubscribe
Massimo Carro
www.liquida.it - www.liquida.com