Hi Traian,
There is your problem: you are using RF=1, meaning each node is
responsible for its own range and nothing more. So when one of your five
nodes goes down, do the math, you simply can't read 1/5 of your data.
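The arithmetic above generalizes. A minimal sketch (an illustration, not Cassandra code; it assumes token ranges are spread uniformly and that a range is unreadable only when every replica holding it is down):

```python
from math import comb

def unreadable_fraction(nodes, rf, down):
    """Fraction of token ranges whose replicas are ALL among the down nodes."""
    if down < rf:
        return 0.0
    return comb(down, rf) / comb(nodes, rf)

# 5-node cluster, RF=1, one node down: 1/5 of the data is unreadable.
print(unreadable_fraction(5, 1, 1))  # 0.2

# With RF=3 on the same cluster, losing one node loses no data.
print(unreadable_fraction(5, 3, 1))  # 0.0
```

This is why raising the replication factor trades disk space for availability.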
This is very cool for performance since each node owns its own part of the
data and any write
Hi,
I'd like to release version 1.2.1-1 of Mojo's Cassandra Maven Plugin
to sync up with the 1.2.1 release of Apache Cassandra.
We solved 1 issue:
http://jira.codehaus.org/secure/ReleaseNote.jspa?projectId=12121&version=19089
Staging Repository:
I will let committers, or anyone with knowledge of Cassandra internals,
answer this.
From what I understand, you should be able to insert data on any up node
with your configuration...
Alain
2013/2/14 Traian Fratean traian.frat...@gmail.com
You're right regarding data availability on that
On Thu, Feb 14, 2013 at 1:36 PM, Muntasir Raihan Rahman
muntasir.rai...@gmail.com wrote:
Hi,
I am trying to run cassandra on a 10 node cluster. But I keep getting this
error: Line 1 = Keyspace names must be case-insensitively unique
(usertable conflicts with usertable).
When are you
Hi all,
According to
http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-windows-service-new-cql-clients-and-more
running cassandra.bat install should make Cassandra run as a service on a
Windows box. However, I'm getting the following when I try:
Using multithreading, inserting 2000 per thread, resulted in no throughput
increase. Each thread is taking about 4 seconds, indicating a bottleneck
elsewhere.
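A quick sanity check on the figures above (2000 inserts per thread, roughly 4 seconds per thread; the interpretation is mine, hedged accordingly):

```python
# Back-of-envelope from the numbers reported above.
inserts_per_thread = 2000
seconds_per_thread = 4
rate_per_thread = inserts_per_thread / seconds_per_thread
print(rate_per_thread)  # 500.0 inserts/s per thread

# If total throughput stays flat as threads are added, the per-thread
# rate must be dropping, which points at a shared bottleneck (client
# box, network, or server) rather than per-thread overhead.
```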
Ken
- Original Message -
From: Tyler Hobbs ty...@datastax.com
To: user@cassandra.apache.org
Sent: Wednesday,
It could be that the instances are I/O limited.
I've been running benchmarks with Cassandra 1.1.9 for the last 2 weeks on
an AMD FX 8-core with 32 GB of RAM.
With 24 threads I get roughly 20K inserts per second. Each insert is
only about 100-150 bytes.
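Those numbers imply a fairly modest payload bandwidth. A rough estimate (taking the midpoint of the stated 100-150 byte range, my assumption):

```python
# Payload bandwidth implied by the benchmark figures above.
inserts_per_sec = 20_000
bytes_per_insert = 125  # assumed midpoint of the 100-150 byte range
mb_per_sec = inserts_per_sec * bytes_per_insert / 1_000_000
print(mb_per_sec)  # 2.5 MB/s of raw payload
```

Note that actual disk traffic will be higher than this, since the commitlog, memtable flushes, and compaction all amplify writes; this is only a lower bound on what the disks see.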
On Thu, Feb 14, 2013 at 8:07 AM, ka...@comcast.net
Hello,
We just upgraded from 1.1.2 to 1.1.9. We use the ByteOrderedPartitioner
(we generate our own hashes). We have not yet upgraded
sstables.
Before the upgrade, we had a balanced ring.
After the upgrade, we see:
10.0.4.22 us-east 1a Up Normal 77.66 GB
Hi - I am doing a load test using YCSB across 2 nodes in a cluster and seeing a
lot of dropped mutation messages. I understand that this is due to the replica
not being written to the other node? RF=2, CL=1.
From the wiki -
For MUTATION messages this means that the mutation was not applied
Those are good suggestions, guys. I'm using Java 7 and this is my first
install of C*, so it looks like it might be genuine.
From what I understand this is a minor issue that doesn't affect
functionality, correct? If not, I should probably download a previous
version of C* or build my own...
Have filed a
Alain,
I found out that the client node is an m1.small, and the cassandra nodes are
m1.large.
This is what is contained in each row: {dev1-dc1r-redir-0.unica.net/B9tk:
{batchID: 2486272}}. Not a whole lot of data.
An m1.small will probably be unable to maximize throughput on your m1.large
cluster.
If you don't use EBS, how is data persistence then maintained in the event
that an instance goes down for whatever reason?
You answered it yourself earlier in this thread: I'm writing to a column
family in a
I second these questions: we've been looking into changing some of our CFs
to use leveled compaction as well. If anybody here has the wisdom to answer
them it would be of wonderful help.
Thanks
Charles
On Wed, Feb 13, 2013 at 7:50 AM, Mike mthero...@yahoo.com wrote:
Hello,
I'm investigating
Generally, data isn't written to whatever node the client connects to. In
your case, a row is written to one of the nodes based on the hash of the
row key. If that one replica node is down, it won't matter which
coordinator node you attempt the write through at CL.ONE: the write will fail.
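The placement logic described above can be sketched in a few lines. This is a toy model (MD5 standing in for the partitioner, one token per node, SimpleStrategy-style clockwise walk), not Cassandra's actual implementation:

```python
import hashlib
from bisect import bisect_right

def token(key: str) -> int:
    # Stand-in partitioner: hash the row key onto the ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def replicas(ring, key, rf):
    """ring: sorted list of (token, node) pairs. Walk clockwise from the
    key's token and take the next rf nodes, SimpleStrategy-style."""
    tokens = [t for t, _ in ring]
    i = bisect_right(tokens, token(key)) % len(ring)
    return [ring[(i + k) % len(ring)][1] for k in range(rf)]

# Toy 5-node ring: the replica set for a key is fixed by the key's hash,
# regardless of which coordinator the client happens to connect to.
ring = sorted((token(f"node{n}"), f"node{n}") for n in range(5))
print(replicas(ring, "some-row-key", 1))
```

With RF=1, the single node returned here is the only place the row can live; if it is down, no choice of coordinator helps.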
If you want
This is new.
On Thu, Feb 14, 2013 at 9:24 AM, Muntasir Raihan Rahman
muntasir.rai...@gmail.com wrote:
--
Best Regards
Muntasir Raihan Rahman
Email: muntasir.rai...@gmail.com
Phone: 1-217-979-9307
Department of Computer Science,
University of Illinois Urbana Champaign,
3111 Siebel
I was hoping for a rick roll.
On 14 February 2013 16:55, Eric Evans eev...@acunu.com wrote:
This is new.
On Thu, Feb 14, 2013 at 9:24 AM, Muntasir Raihan Rahman
muntasir.rai...@gmail.com wrote:
I apologize for this silly mistake!
Thanks
Muntasir.
On Thu, Feb 14, 2013 at 11:01 AM, Andy Twigg andy.tw...@gmail.com wrote:
I was hoping for a rick roll.
On 14 February 2013 16:55, Eric Evans eev...@acunu.com wrote:
This is new.
On Thu, Feb 14, 2013 at 9:24 AM, Muntasir Raihan Rahman
Thanks Aaron and Manu.
Since we are using 1.1, there is no num_tokens parameter. When I upgrade to
1.2, should I set num_tokens=1 to start up, or can I set it to other numbers?
Daning
On Tue, Feb 12, 2013 at 3:45 PM, Manu Zhang owenzhang1...@gmail.com wrote:
num_tokens is only used at
From:
http://www.datastax.com/docs/1.2/configuration/node_configuration#num-tokens
About num_tokens: If left unspecified, Cassandra uses the default value of
1 token (for legacy compatibility) and uses the initial_token. If you
already have a cluster with one token per node, and wish to migrate
From the exception, it looks like Astyanax didn't even try to call Cassandra. My
guess would be that Astyanax is token aware: it detects the node is down and
doesn't even try. If you use Hector, it might try to write, since it's not
token aware. But as Byran said, it will eventually fail. I guess
Hi - Is there a parameter that can be tuned to prevent the mutations from
being dropped? Is this logic correct?
Nodes A and B with RF=2, CL=1. Load balanced between the two.
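The logic in question can be made concrete with a small sketch (my illustration of the semantics, not Cassandra code): with RF=2 and CL=ONE, the coordinator acknowledges the client as soon as one replica responds, so the mutation destined for the other replica can still be dropped under load.

```python
def write_acked(acks: int, cl: int) -> bool:
    # The client sees success once CL replicas have responded.
    return acks >= cl

def fully_replicated(acks: int, rf: int) -> bool:
    # But the row only reaches durability on all copies at RF acks.
    return acks >= rf

print(write_acked(1, 1))       # True: client sees a successful write
print(fully_replicated(1, 2))  # False: second replica's mutation may be dropped
```

The gap between the two is what hinted handoff, read repair, and anti-entropy repair exist to close, which is why a dropped mutation at CL=ONE is an inconsistency rather than a client-visible failure.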
-- Address Load Tokens Owns (effective) Host ID
Rack
UN 10.x.x.x
I haven't tried to switch compaction strategy. We started with LCS.
For us, after massive data imports (5000 writes/second for 6 days), the first
repair is painful since there is quite some data inconsistency. For 150 GB nodes,
repair brought in about 30 GB and created thousands of pending
BTW, when I say major compaction, I mean running the nodetool compact
command (which performs a major compaction for Size-Tiered Compaction). I didn't
see the distribution of SSTables I expected until I ran that command, in the
steps I described below.
-Mike
On Feb 14, 2013, at 3:51 PM, Wei
Thanks! Suppose I upgrade to 1.2.x with 1 token by commenting out
num_tokens; how can I change to multiple tokens? I could not find a doc
clearly stating this.
On Thu, Feb 14, 2013 at 10:54 AM, Alain RODRIGUEZ arodr...@gmail.comwrote:
From:
Hi Guys,
What's the syntax for multiget_slice in CQL3? How about multiget_count?
-- Drew
I'm confused about what you are looking to do.
CQL3 syntax (SELECT * FROM keyspace.cf WHERE user = 'cooldude') has
nothing to do with thrift client calls (such as multiget_slice).
What is your goal here?
Best,
michael
On 2/14/13 5:57 PM, Drew Kutcharian d...@venarc.com wrote:
Hi Guys,
What's the
The equivalent of multiget_slice is:
select * from table where primary_key in ('that', 'this', 'the other thing')
Not sure if you can count these in a way that makes sense, since you
cannot group.
On Thu, Feb 14, 2013 at 9:17 PM, Michael Kjellman
mkjell...@barracuda.com wrote:
I'm confused what
I have been looking at incremental backups and snapshots. I have done some
experimentation but could not come to a conclusion. Can somebody please help me
understanding it right?
/data is my data partition
With incremental_backups turned OFF in cassandra.yaml - are all SSTables
under
+1 =)
2013/2/14 Stephen Connolly stephen.alan.conno...@gmail.com
Hi,
I'd like to release version 1.2.1-1 of Mojo's Cassandra Maven Plugin
to sync up with the 1.2.1 release of Apache Cassandra.
We solved 1 issue:
Thanks Edward. I assume I can still do a column slice using WHERE in the case
of wide rows. I wonder if the multiget count is the only thing you can do
using thrift but not CQL3.
On Feb 14, 2013, at 6:35 PM, Edward Capriolo edlinuxg...@gmail.com wrote:
The equivalent of multiget_slice is