Thank you for the reply! I am not trying to read a row with too many columns
into memory; the lock I am experiencing is write-related only and affects
everything that was added prior to some unknown event.

I just ran into the same thing again, and the column count is maybe not the
real issue here (as I assumed too quickly when writing the initial mail,
sorry!), since this also happened after reducing the ID tracking rows to a
maximum of 1,000 columns per row.

I have just inserted about 30,000 rows in the last hour, and I also keep a
count column in a tracking row to record the number of values; it is read,
incremented and updated for each new insert.
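
For illustration, that read/increment/update cycle looks roughly like this in
cassandra-cli terms (row key and values are placeholders, and I am assuming an
ascii/utf8 value here rather than the actual bytes):

[default@TestKS] get CFTest['<tracking-row-key>']['count'];
=> (column=count, value=31500, timestamp=...)
[default@TestKS] set CFTest['<tracking-row-key>']['count'] = '31501';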

Suddenly it was no longer updated while new data was still being added.

I used cassandra-cli to try to delete that example row manually, to make sure
the cause was not in my application:

[default@TestKS] get
CFTest['44656661756c747c65333332356231342d373937392d313165302d613663382d3132333133633033336163347c5461626c65737c5765625369746573'];
=> (column=count, value=3331353030, timestamp=1464439894)
=> (column=split, value=3334, timestamp=1464439894)
[default@TestKS] del
CFTest['44656661756c747c65333332356231342d373937392d313165302d613663382d3132333133633033336163347c5461626c65737c5765625369746573'];

row removed.
[default@TestKS] get
CFTest['44656661756c747c65333332356231342d373937392d313165302d613663382d3132333133633033336163347c5461626c65737c5765625369746573'];
=> (column=count, value=3331353030, timestamp=1464439894)
=> (column=split, value=3334, timestamp=1464439894)

… and it won't go away or be overwritten by a new "set".
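
For completeness, the overwrite attempt is just a plain set on the same row,
roughly like this (row key shortened to a placeholder, value just an example,
again assuming an ascii/utf8 value):

[default@TestKS] set CFTest['<tracking-row-key>']['count'] = '30000';

A get afterwards still returns the old count and split columns shown above.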


What could be the reason for this, or how can I track down the cause?

Thanks,
 Mario



2011/6/4 Jonathan Ellis <jbel...@gmail.com>

> It sounds like you're trying to read entire rows at once. Past a
> certain point (depending on your heap size) you won't be able to do
> that, you need to "page" through them N columns at a time.
>
>
