I'm not sure what problem you're trying to solve. The exception you
pasted should stop once your clients are no longer trying to use the
dropped CF.
On Sat, Aug 20, 2011 at 10:09 PM, Yan Chunlu wrote:
> that could be the reason. I did nodetool repair (it never finished, and the data size
> increased about six times, 30G vs 170G), so there are probably some unclean ...
that could be the reason. I did nodetool repair (it never finished, and the data size
increased about six times, 30G vs 170G), so there are probably some unclean
sstables on that node.
however, upgrading is a tough job for me right now. could nodetool scrub
help? or should I decommission the node and join it again?
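For reference, a minimal sketch of the two options being weighed here, assuming nodetool can reach the node (the host name is a placeholder, and scrub is only available if your release includes it):

  # option 1: rebuild the sstables in place on the suspect node
  nodetool -h node3 scrub

  # option 2: remove the node from the ring, wipe it, and bootstrap it again
  nodetool -h node3 decommission
  # then clear its data directories and restart it with auto_bootstrap enabled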
This means you should upgrade, because we've fixed bugs about ignoring
deleted CFs since 0.7.4.
On Fri, Aug 19, 2011 at 9:26 AM, Yan Chunlu wrote:
> the log file shows the following; I'm not sure what 'Couldn't find cfId=1000'
> means (Google just returned useless results):
>
> INFO [main] 2011-08-18 07:23:17,688 DatabaseDescriptor.java (line 453) ...
Can you post the complete Cassandra log starting with the initial
start-up of the node after having removed schema/migrations?
--
/ Peter Schuller (@scode on twitter)
> the log file shows the following; I'm not sure what 'Couldn't find cfId=1000'
> means (Google just returned useless results):
Those should be the indication that the schema is wrong on the node.
Reads and writes are being received from other nodes pertaining to
column families it does not know about.
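A quick way to confirm that kind of schema drift is to compare the schema versions the nodes report; a minimal sketch (the host name is a placeholder):

  cassandra-cli -h node1 -p 9160
  [default@unknown] describe cluster;

A healthy cluster lists every node under a single schema version; a node whose schema is out of date shows up under its own version.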
any suggestions? thanks!
On Fri, Aug 19, 2011 at 10:26 PM, Yan Chunlu wrote:
> the log file shows the following; I'm not sure what 'Couldn't find cfId=1000'
> means (Google just returned useless results):
>
>
> INFO [main] 2011-08-18 07:23:17,688 DatabaseDescriptor.java (line 453)
> Found table data in data directories. Consider using JMX to call ...
the log file shows the following; I'm not sure what 'Couldn't find cfId=1000'
means (Google just returned useless results):
INFO [main] 2011-08-18 07:23:17,688 DatabaseDescriptor.java (line 453) Found
table data in data directories. Consider using JMX to call
org.apache.cassandra.service.StorageService...
Look in the logs to find out why the migration did not get to node2.
Otherwise yes you can drop those files.
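A sketch of that log check, assuming the default log location (the path varies by install):

  # did node2 ever apply the migration, or did it log errors around that time?
  grep -i migration /var/log/cassandra/system.log
  grep -i 'cfId' /var/log/cassandra/system.log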
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 18/08/2011, at 11:25 PM, Yan Chunlu wrote:
> just found out that after making changes via cassandra-cli, the schema change ...
just found out that after making changes via cassandra-cli, the schema change didn't
reach node2, and node2 became unreachable.
I followed this document:
http://wiki.apache.org/cassandra/FAQ#schema_disagreement
but after that I still got two schema versions:
ddcada52-c96a-11e0-99af-3bd951658d61: [node1, nod...
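For context, the procedure that FAQ entry describes, sketched as shell commands; the data directory path is an assumption, so use whatever your cassandra.yaml points at:

  # on the node with the divergent schema (node2 here), stop cassandra first,
  # then remove the schema and migrations sstables from the system keyspace
  rm /var/lib/cassandra/data/system/Schema*
  rm /var/lib/cassandra/data/system/Migrations*
  # start cassandra again; it should pull the current schema from the ring
  # finally, check that 'describe cluster;' in cassandra-cli shows a single version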
thanks a lot for all the help! I have gone through the steps and
successfully brought up node2 :)
On Thu, Aug 18, 2011 at 10:51 AM, Boris Yen wrote:
> Because the file only preserves the keys of the records, not the whole records.
> Records for those saved keys will be loaded back into Cassandra during ...
Because the file only preserves the keys of the records, not the whole records.
Records for those saved keys will be loaded back into Cassandra during the startup
of Cassandra.
On Wed, Aug 17, 2011 at 5:52 PM, Yan Chunlu wrote:
> but the data in the saved_caches directory is relatively small:
>
> will that cause the load problem?
but the data in the saved_caches directory is relatively small:
will that cause the load problem?
ls -lh /cassandra/saved_caches/
total 32M
-rw-r--r-- 1 cass cass 2.9M 2011-08-12 19:53 cass-CommentSortsCache-KeyCache
-rw-r--r-- 1 cass cass 2.9M 2011-08-17 04:29 cass-CommentSortsCache-RowCache
...
If you have a node that cannot start up due to issues loading the saved cache,
delete the files in the saved_caches directory before starting it.
The settings to save the row and key cache are per CF. You can change them with
an update column family statement via the CLI when attached to any node.
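A minimal sketch of that statement; the CF name is taken from later in this thread, and the attribute names should be checked against 'help update column family;' on your version:

  [default@MyKeyspace] update column family COMMENT with rows_cached=0 and keys_cached=0;

Because this is a schema change, issuing it on any reachable node applies it cluster-wide once the schema propagates.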
does this need to be cluster-wide? or could I just modify the caches
on one node? I could not connect to the node with
cassandra-cli, it says 'connection refused':
[default@unknown] connect node2/9160;
Exception connecting to node2/9160. Reason: Connection refused.
so if I change the cache ...
Hi,
yes, we saw exactly the same messages. We got rid of these by doing the
following (see the command sketch after the list):
* Set all row & key caches in your CFs to 0 via cassandra-cli
* Kill Cassandra
* Remove all files in the saved_caches directory
* Start Cassandra
* Slowly bring back row & key caches (if desired, we left them ...)
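A sketch of that sequence as shell commands; the saved_caches path is taken from the listing earlier in the thread, and the service commands are assumptions about the install:

  # after setting rows_cached/keys_cached to 0 via cassandra-cli:
  sudo service cassandra stop          # or kill the cassandra JVM
  rm /cassandra/saved_caches/*         # remove the stale saved caches
  sudo service cassandra start
  # once the node is healthy, raise the cache sizes again gradually via the CLI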
the logs say it took a long time to read a saved row cache. Try removing the
files from the saved_caches dir as Jonathan suggested.
The 'collecting' log lines with the int max count (2147483647) are indicative of the
IdentityQueryFilter. One of the places it is used is when adding rows to the
cache.
Cheers
I saw a lot of SliceQueryFilter entries after changing the log level to DEBUG. I just
thought even bringing up a new node would be faster than starting the old one.
it is weird:
DEBUG [main] 2011-08-16 06:32:49,213 SliceQueryFilter.java (line 123)
collecting 0 of 2147483647: 76616c7565:false:225@13130688454743
but it seems the row cache setting is cluster-wide; how will the change of the row
cache affect the read speed?
On Mon, Aug 15, 2011 at 7:33 AM, Jonathan Ellis wrote:
> Or leave row cache enabled but disable cache saving (and remove the
> one already on disk).
>
> On Sun, Aug 14, 2011 at 5:05 PM, aaron morton wrote:
Or leave row cache enabled but disable cache saving (and remove the
one already on disk).
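A sketch of what disabling cache saving could look like at the CLI; the attribute name here (row_cache_save_period, in some versions row_cache_save_period_in_seconds) is an assumption, so check 'help update column family;' first:

  [default@MyKeyspace] update column family COMMENT with row_cache_save_period=0;

  # then, on each node, remove the saved row cache file already on disk
  # (named like the saved_caches listing earlier in the thread):
  rm /cassandra/saved_caches/*-COMMENT-RowCache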
On Sun, Aug 14, 2011 at 5:05 PM, aaron morton wrote:
> INFO [main] 2011-08-14 09:24:52,198 ColumnFamilyStore.java (line 547)
> completed loading (1744370 ms; 20 keys) row cache for COMMENT
>
> It's taking 29 minutes to load 200,000 rows in the row cache ...
> INFO [main] 2011-08-14 09:24:52,198 ColumnFamilyStore.java (line 547)
> completed loading (1744370 ms; 20 keys) row cache for COMMENT
It's taking 29 minutes to load 200,000 rows in the row cache. That's a pretty
big row cache; I would suggest reducing or disabling it.
Background
http://w
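For reference, the 29-minute figure comes straight from the log line quoted above: 1,744,370 ms / 1000 / 60 ≈ 29 minutes.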
I have 3 nodes and RF=3. when I was repairing node3, it seems a lot of data was
generated, and the server could not handle the load and crashed.
after it came back, node3 has not been able to return for more than 96 hours.
with 34GB of data, node2 could restart and be back online within 1 hour.
I am not sure what's wrong with node...