Dear all
I am using Cassandra to retrieve a number of rows and columns stored in it.
Initially I had a 1-node cluster and I flooded it with data. When I ran Hector
code to retrieve data from it, I got the following output:
Total number of rows in the database are 396
Total number of columns in the da
Dear all
I originally had a 1-node cluster. Then I added one more node to it with the
initial token configured appropriately. Now when I run my queries I am not
getting all my data, i.e. all the columns.
Output on 2 nodes
Time taken to retrieve columns 43707 of key range is 1276
Time taken to retri
I was told that the node bootstraps automatically in version 1.1.0 of
Cassandra. Please help me rectify the mistake.
Prakrati Agrawal | Developer - Big Data(I&D)| 9731648376 | www.mu-sigma.com
From: Tyler Hobbs [mailto:ty...@datastax.com]
Sent: Wednesday, June 06, 2012 11:45 PM
To: user@ca
I have specified the consistency level as 1
From: Poziombka, Wade L [mailto:wade.l.poziom...@intel.com]
Sent: Wednesday, June 06, 2012 11:11 PM
To: user@cassandra.apache.org
Subject: RE: Cassandra not retrieving the compl
What is the default replication factor? I did not set any replication factor.
-----Original Message-----
From: Tim Wintle [mailto:timwin...@gmail.com]
Sent: Wednesday, June 06, 2012 5:42 PM
To: user@cassandra.apache.org
S
As the new node starts up I get this error before bootstrap starts:
INFO 08:20:51,584 Enqueuing flush of Memtable-schema_columns@1493418651(0/0
serialized/live bytes, 1 ops)
INFO 08:20:51,584 Writing Memtable-schema_columns@1493418651(0/0
serialized/live bytes, 1 ops)
INFO 08:20:51,589 Completed
On Thu, Jun 7, 2012 at 5:41 AM, aaron morton wrote:
> Sounds good. Do you want to make the change ?
>
Done.
>
> Thanks for taking the time.
>
Thanks for giving the answer!
Jim
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 7/06/2
Is it possible to explicitly set a column value to null?
I see that if the insert statement does not include a specific column, that column
comes up as null (assuming we are creating a record with a new unique key).
But if we want to update a record, how do we set it to null?
Another situation is when I
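A hedged sketch of the usual answer: Cassandra has no stored null, so "setting a column to null" on an existing row is done by deleting that column. Assuming a column family `users` with a column `email` (both names are illustrative, not from the thread), the CQL would look like:

```
-- Deleting the column is the idiomatic way to "null" it;
-- a subsequent read simply returns no value for that column.
DELETE email FROM users WHERE KEY = 'some-row-key';
```

The row itself survives; only the named column is tombstoned.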
It's not a hard rule; you can put more data on a node. The 300GB to 400GB idea
is mostly concerned with operations: you may want to put less on a node due to
higher throughput demands.
(We are talking about the amount of data on a node, regardless of the RF).
On the operations side the consid
Sorry, not the dynamic snitch, but hinted handoff. Remember Cassandra is
eventually consistent.
2012/6/8 ruslan usifov :
> Yes, with ONE you can get an inconsistent read when one of your
> nodes dies, and the dynamic snitch doesn't do its job
>
> 2012/6/7 Oleg Dulin :
>> We have a 3-node cluster. We use R
Yes, with ONE you can get an inconsistent read when one of your
nodes dies, and the dynamic snitch doesn't do its job
2012/6/7 Oleg Dulin :
> We have a 3-node cluster. We use RF of 3 and CL of ONE for both reads and
> writes…. Is there a reason I should schedule a regular nodetool repair job ?
>
We have a 3-node cluster. We use RF of 3 and CL of ONE for both reads
and writes…. Is there a reason I should schedule a regular nodetool
repair job ?
Thanks,
Oleg
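The trade-off in this thread can be sketched with the standard read/write overlap rule (an illustration of the general quorum-intersection idea, not Cassandra internals): a read is guaranteed to see the latest write only when the read and write consistency levels together touch more than RF replicas.

```python
# Hedged sketch: with replication factor n, a read at consistency
# level r and a write at level w are guaranteed to overlap on at
# least one replica (so the read sees the write) only when r + w > n.
def overlap_guaranteed(n: int, r: int, w: int) -> bool:
    return r + w > n

# RF=3 with CL ONE for both reads and writes gives no such guarantee,
# which is why regular `nodetool repair` (plus hinted handoff and read
# repair) is what converges the replicas.
print(overlap_guaranteed(3, 1, 1))  # False
print(overlap_guaranteed(3, 2, 2))  # True  (QUORUM/QUORUM)
```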
I can't quite describe what happened, but essentially one day I found
that my column values that are supposed to be UTF-8 strings started
getting bogus characters.
Is there a known data corruption issue with 1.1 ?
We observed a JRE crash on one node in a seven node cluster about a half hour
after upgrading to version 1.1.1 yesterday. Immediately after the upgrade,
everything seemed to be working fine. The last item in the Cassandra log was an
info-level notification that compaction had started on a data fi
Hi,
One of my 1.1.1 nodes doesn't restart due to stack overflow on building the
interval tree. Bumping the stack size doesn't help. Here's the stack trace:
https://gist.github.com/2889611
It looks more like an infinite loop in the IntervalNode constructor's logic
than a deep tree, since the DEBUG log sho
nodetool ring showed 34.89GB load. Upgrading from 1.1.0. One small keyspace
with no compression, about 250MB. The rest taken by the second keyspace
with leveled compaction and snappy compressed.
The blade is an Intel(R) Xeon(R) CPU E5620 @ 2.40GHz with 6GB of RAM.
On Thu, Jun 7, 2012 at 2:52 AM,
Hello.
I am giving some Cassandra presentations in Kyiv and would like to check
that I am telling people the truth :)
Could the community tell me whether the following points are true:
1) A failed (from the client-side view) operation may still be applied to the cluster
2) The coordinator does not try anything to "roll-back" operati
Does this "max load" correlate with the replication factor?
I.e., for a 3-node cluster with an RF of 3, should I be worried at {max load} x 3,
or at the max load people generally mention?
On Thu, Jun 7, 2012 at 10:55 PM, Filippo Diotalevi wrote:
> Hi,
> one of latest Aaron's observation about the ma
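One way to read the per-node guidance in this thread (an illustrative sketch, not an official formula): the 300GB-400GB figure is the raw load per node as reported by nodetool, replicas included, so the unique data a cluster can hold scales as nodes x per-node load / RF.

```python
# Hedged sketch: unique (pre-replication) data capacity of a cluster
# when each node carries `per_node_load_gb` of raw load, replicas included.
def unique_capacity_gb(nodes: int, per_node_load_gb: float, rf: int) -> float:
    return nodes * per_node_load_gb / rf

# A 3-node cluster at 350GB raw load per node with RF=3 holds
# only 350GB of unique data, since every row lives on all 3 nodes.
print(unique_capacity_gb(3, 350, 3))  # 350.0
```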
Hi,
one of Aaron's latest observations about the max load per Cassandra node caught
my attention
> At ~840GB I'm probably running close
> to the max load I should have on a node,[AM] roughly 300GB to 400GB is the
> max load
Since we currently have a Cassandra node with roughly 330GB of data, it l
Hi
I can't find this in any documentation online, so I just wanted to ask:
do all flush writers share the same flush queue, or do they each maintain
their own separate queue?
Thanks
Rohit
Cassandra is not designed to run as a multi-tenant database.
There have been some recent discussions on this, search the user group for more
detailed answers.
Cheers
On 7/06/2012, at 7:03 PM, MOHD AR
> of Cassandra 0.8.1
I would recommend upgrading to the latest 0.8 release there are a lot bug
fixes. (if not 1.0.10)
> Please help me with how to add a new node to the ring so that it gets all the
> updates/data lost on the crashed server.
Have you been working at CL QUORUM and running repair regularly?
Am
How much data do you have on the node ?
Was this a previously running system that was upgraded ?
> with disk_access_mode mmap_index_only and mmap I see OOM map failed error on
> SSTableBatchOpen thread
Do you have the stack trace from the log ?
> ERROR [CompactionExecutor:6] 2012-06-06 20:24:1
Sounds good. Do you want to make the change ?
Thanks for taking the time.
On 7/06/2012, at 7:54 AM, Jim Ancona wrote:
> On Tue, Jun 5, 2012 at 4:30 PM, Jim Ancona wrote:
> It might be a good idea fo
> I am now running major compactions on those nodes (and all is well so far).
Major compaction in this situation will make things worse. When you end up with one
big file, you will need that much space again to compact / upgrade / re-write
it.
> back down to a normal size, can I move all the data b
for 0.8
http://www.datastax.com/docs/0.8/operations/cluster_management#replacing-a-dead-node
On Thu, Jun 7, 2012 at 1:22 PM, rohit bhatia wrote:
> Pardon me for assuming that your new node was the same as the failed node.
>
> please see
> http://www.datastax.com/docs/1.0/operations/cluster_mana
Pardon me for assuming that your new node was the same as the failed node.
please see
http://www.datastax.com/docs/1.0/operations/cluster_management#replacing-a-dead-node
You should be able to proceed with the above link after
decommissioning the new node...
On Thu, Jun 7, 2012 at 1:12 PM, Adee
Hi,
I have done the same and now it displays three nodes in the ring. How do I remove
the crashed node, and what about its data?
root@zerg:~/apache-cassandra-0.8.1/bin# ./nodetool -h XXX.XX.XXX.XX ring
Address         DC      Rack    Status  State   Load    Owns    Token
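For the crashed node, the usual 0.8-era approach (a hedged sketch; the host and token below are placeholders, not values from this thread) is to tell the ring to drop the dead node's token, after which its range is handed to the remaining nodes:

```
# Run against any live node; use the Token shown for the Down node
# in the `nodetool ring` output above.
nodetool -h <live-node-address> removetoken <token-of-dead-node>
```

Its on-disk data stays on the dead machine and is simply abandoned; the live replicas serve the range.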
Restart Cassandra on the new node with autobootstrap set to true, the seed set to
an existing node in the cluster, and an appropriate token.
You should not need to run nodetool repair, as autobootstrap will take
care of it.
On Thu, Jun 7, 2012 at 12:22 PM, Adeel Akbar
wrote:
> Hi,
>
>
>
> I am running 2 n
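As a sketch, the relevant cassandra.yaml settings on the joining 0.8-era node might look like this (the token and seed address are illustrative placeholders, not values from this thread):

```yaml
# cassandra.yaml on the joining node (illustrative values)
auto_bootstrap: true     # stream the node's range from existing nodes on first start
initial_token: 85070591730234615865843651857942052864   # e.g. 2^126, halfway round a 2-node RandomPartitioner ring
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.1"   # an existing node in the cluster, not the new node itself
```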
On Thu, Jun 7, 2012 at 1:49 AM, sj.climber wrote:
> Looking at the data file directory, it's clear that the major compaction is
> progressing. However, I am unable to get stats on the compaction. More
> specifically, "nodetool -h host1 compactionstats" yields the following
> NullPointerException
Hi All,
I wanted to know how to use Cassandra as a multi-tenant database.
Regards
Arshad