But now, when you set the value to 0, that index row will get very wide as it
collects everything completed. You may want to consider deleting the indexed
column for completed rows instead, once they are done.
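A minimal sketch of that idea (plain Python standing in for the indexed column family; all names are illustrative, not Cassandra API): completed items are dropped from the index entirely rather than re-indexed under 0.

```python
# Toy model of a secondary index: status value -> set of row keys.
# Indexing completed items under 0 makes that one index row grow forever;
# deleting the indexed column instead keeps the index bounded.
index = {}

def enqueue(key, status):
    index.setdefault(status, set()).add(key)

def complete(key, status):
    # Instead of moving the key to index[0], drop it from the index entirely.
    index[status].discard(key)

enqueue("job-1", 1)
enqueue("job-2", 1)
complete("job-1", 1)

assert index[1] == {"job-2"}   # only pending work remains indexed
assert 0 not in index          # no ever-growing "completed" index row
```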
Cassandra's built-in indexes are not a great fit for a queue. You could write
your own index here and potentially do better.
We do not have a central point to do such work yet, but it seems this is the
only way to do it a bit more efficiently. Thanks.
Best Regards!
Jian Jin
2012/4/5 aaron morton
> You cannot set the TTL without also setting the column value.
>
> Could you keep a record of future deletes in a CF and then action them as a
> batch process?
It would be really helpful if leveled compaction printed the level in the log.
Demo:
INFO [CompactionExecutor:891] 2012-04-05 22:39:27,043
CompactionTask.java (line 113) Compacting ***LEVEL 1***
[SSTableReader(path='/var/lib/cassandra/data/rapidshare/querycache-hc-19690-Data.db'),
SSTableReader(
In your case, Cassandra will read the data from the nearest node, and read
digests from the other two nodes.
When those reads meet the requested consistency level, Cassandra will return
the result.
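A rough sketch of that digest-read idea (plain Python, not Cassandra's actual implementation; `read` and the md5 choice are illustrative): the coordinator gets the full value from the closest replica, only hashes from the others, and compares.

```python
import hashlib

def digest(value: bytes) -> bytes:
    # A digest read returns only a hash of the column data, not the data itself.
    return hashlib.md5(value).digest()

def read(replicas, consistency_level):
    """replicas: byte values held by each replica node, closest first."""
    data = replicas[0]                        # full read from the closest node
    digests = [digest(v) for v in replicas[1:consistency_level]]
    if all(d == digest(data) for d in digests):
        return data, True                     # replicas agree: return the result
    return data, False                        # mismatch: read repair would kick in

value, consistent = read([b"v1", b"v1", b"v1"], consistency_level=3)
assert consistent
_, consistent = read([b"v1", b"v0", b"v1"], consistency_level=3)
assert not consistent
```

Comparing small digests instead of shipping the full value from every replica is what keeps the extra replicas cheap to consult.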
maki
From iPhone
On 2012/04/06, at 1:22, zhiming shen wrote:
> Thanks for your reply. My question is about the impact of replication on
> load balancing.
Thanks for all the help everyone. The values were meant to be binary. I ended
up making the possible values between 0 and 50 instead of just 0 or 1. That way
no single index row gets that wide. I now run queries for everything from 1 to
50 to get 'queued' items and set the value to 0 when I'm done.
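The bucketing trick above can be sketched like this (plain Python model; the dict stands in for the indexed column family, and the names are illustrative):

```python
import random

BUCKETS = range(1, 51)           # 50 possible 'queued' values instead of just 1
index = {b: set() for b in BUCKETS}
done = set()                     # stands in for value 0 ('done'), left unindexed

def enqueue(key):
    # Spread queued items across 50 index rows so no single row gets huge.
    index[random.choice(BUCKETS)].add(key)

def queued_items():
    # 'Query everything from 1 to 50' to collect the queued items.
    return set().union(*(index[b] for b in BUCKETS))

def mark_done(key):
    for b in BUCKETS:
        index[b].discard(key)
    done.add(key)

for i in range(200):
    enqueue(f"job-{i}")
assert len(queued_items()) == 200
mark_done("job-7")
assert "job-7" not in queued_items()
```

The trade-off is one query per bucket on read in exchange for index rows that stay roughly 50 times narrower.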
On 04/05/2012 03:19 PM, ruslan usifov wrote:
Hello
It looks like Cassandra 1.0.x is stable and has interesting things
like off-heap memtables and row caches, so we want to upgrade to the 1.0
version from 0.8. Is it possible to do this without cluster downtime (while
we upgrade all nodes)? I mean fo
Hello
It looks like Cassandra 1.0.x is stable and has interesting things like
off-heap memtables and row caches, so we want to upgrade to the 1.0 version
from 0.8. Is it possible to do this without cluster downtime (while we upgrade
all nodes)? I mean the following: when we begin the upgrade, at some point in worki
Will a 1500-byte row size be large or small for Cassandra, in your
understanding?
Performance degradation starts at 500 MB rows; it is very slow if you hit
this limit.
Hi,
I'm experiencing steady growth in the resident size of the JVM running
Cassandra 1.0.7. I disabled JNA and the off-heap row cache, tested with
and without mlockall disabling paging, and upgraded to JRE 1.6.0_31 to
prevent this bug [1] from leaking memory. Still, the JVM's resident set size
grows steadily. A proce
Just like an Oracle stored procedure.
2012/3/26 Data Craftsman :
> Howdy,
>
> Some Polyglot Persistence (NoSQL) products have started supporting server-side
> scripting, similar to RDBMS stored procedures,
> e.g. Redis Lua scripting.
>
> I wish it is Python when Cassandra has the server-side scripting feature
Hi all,
We are using Hector and often we see lots of timeout exceptions in the log.
I know that Hector can fail over to another node, but I want to reduce the
number of timeouts.
Any Hector parameter I should change to reduce this error?
Also, on the server side, any kind of tuning need to do f
On Thu, Apr 5, 2012 at 9:22 AM, zhiming shen wrote:
> Thanks for your reply. My question is about the impact of replication on
> load balancing. Say we have nodes ABCD... in the ring. ReplicationFactor is
> 3 so the data on A will also have replicas on B and C. If we are reading
> data owned by A, a
Thanks for your reply. My question is about the impact of replication on
load balancing. Say we have nodes ABCD... in the ring. ReplicationFactor is
3 so the data on A will also have replicas on B and C. If we are reading
data owned by A, and A is already very busy, will the requests be forwarded
to
Hi All,
I'm experiencing the following errors while bulk loading data into a cluster
ERROR [Thread-23] 2012-04-05 09:58:12,252 AbstractCassandraDaemon.java
(line 139) Fatal exception in thread Thread[Thread-23,5,main]
java.lang.RuntimeException: Insufficient disk space to flush
781359405649475491
Sun or Open JDK ?
Either way, I would suggest upgrading to the latest JDK, upgrading Cassandra
to 1.0.8, and running nodetool upgradesstables.
If the fault persists after that I would look at IO or memory issues.
Hope that helps.
-
Aaron Morton
Freelance Developer
@aaronmorton
You cannot set the TTL without also setting the column value.
Could you keep a record of future deletes in a CF and then action them as a
batch process?
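One way to sketch that batch-delete pattern (plain Python; the dicts stand in for a "future_deletes" CF and the data CF, and every name here is illustrative, not Cassandra API):

```python
future_deletes = {}               # due timestamp -> set of row keys to delete
data = {"a": 1, "b": 2, "c": 3}   # stands in for the data column family

def schedule_delete(key, delete_at):
    # Record the intended delete instead of relying on a value-less TTL.
    future_deletes.setdefault(int(delete_at), set()).add(key)

def run_batch(now):
    # Batch process: action every recorded delete whose due time has passed.
    for ts in [t for t in future_deletes if t <= now]:
        for key in future_deletes.pop(ts):
            data.pop(key, None)

schedule_delete("a", delete_at=100)
schedule_delete("b", delete_at=200)
run_batch(now=150)
assert "a" not in data and "b" in data   # only the due delete was actioned
```

The batch job can run on any schedule; deletes recorded for the future simply wait in the CF until their due time passes.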
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 5/04/2012, at 2:00 PM, 金剑 wrote:
What OS are you using?
FreeBSD 8.3 64 bit PRERELEASE